fixing a bad PG per OSD decision with pg-autoscaling?

Due to a gross miscalculation several years ago, I set way too many PGs for our original Hammer cluster. We've lived with it ever since, but now that we are on Luminous, any change results in stuck requests and balancing problems.

The cluster currently has 12% of its objects misplaced and is grinding through a rebalance, but it is unusable to clients (even with osd_max_pg_per_osd_hard_ratio set to 32 and mon_max_pg_per_osd set to 1000).
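For reference, the overrides were applied along these lines (shown with ceph.conf entries and runtime injectargs purely as illustrations; the exact mechanism on your deployment may differ):

    # ceph.conf, [global] section -- persist the raised limits across restarts
    mon_max_pg_per_osd = 1000
    osd_max_pg_per_osd_hard_ratio = 32

    # Runtime injection on Luminous; the OSD compares its PG count against
    # mon_max_pg_per_osd * osd_max_pg_per_osd_hard_ratio, so both matter:
    ceph tell mon.* injectargs '--mon_max_pg_per_osd 1000'
    ceph tell osd.* injectargs '--osd_max_pg_per_osd_hard_ratio 32'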

Can I safely press on with the upgrade to Nautilus in this state, so that I can enable pg-autoscaling and finally fix the problem?
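For context, my understanding of the Nautilus-side steps once the upgrade completes is roughly the following (the pool name is a placeholder; starting in "warn" mode seems the cautious choice, since it only reports the recommended pg_num rather than changing it):

    # Enable the autoscaler mgr module (Nautilus and later):
    ceph mgr module enable pg_autoscaler

    # Start in warn mode to see what it would do, then switch to on:
    ceph osd pool set <pool-name> pg_autoscale_mode warn
    ceph osd pool set <pool-name> pg_autoscale_mode on

    # Review current vs. target pg_num per pool:
    ceph osd pool autoscale-status

Nautilus is also the first release that can actually merge PGs (i.e. reduce pg_num), which as I understand it is the only way to undo the over-split without rebuilding the pools.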

thanks.

