On Tue, Jun 16, 2020 at 2:00 PM Boris Behrens <bb@xxxxxxxxx> wrote:
>
> See inline comments
>
> On Tue, Jun 16, 2020 at 1:29 PM, Zhenshi Zhou <deaderzzs@xxxxxxxxx> wrote:
> >
> > I did this on my cluster and there was a huge number of PGs rebalanced.
> > I think setting this option to 'on' is a good idea if it's a brand new cluster.
>
> On our new cluster we enabled it, but not on our primary cluster
> with the most.
>
> On Tue, Jun 16, 2020 at 7:07 PM, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
> >
> > Could you share the output of
> >
> >     ceph osd pool ls detail
>
> pool 1 'pool 1' replicated size 3 min_size 2 crush_rule 0 object_hash
> rjenkins pg_num 256 pgp_num 256 autoscale_mode warn last_change
> 2318859 flags hashpspool min_write_recency_for_promote 1 stripe_width
> 0 application rbd
> pool 3 'pool 3' replicated size 3 min_size 2 crush_rule 0 object_hash
> rjenkins pg_num 16384 pgp_num 16384 autoscale_mode warn last_change
> 2544040 lfor 0/0/1952329 flags hashpspool,selfmanaged_snaps
> min_write_recency_for_promote 1 stripe_width 0 application rbd
> pool 4 'pool 4' replicated size 3 min_size 2 crush_rule 0 object_hash
> rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 2318859
> flags hashpspool min_write_recency_for_promote 1 stripe_width 0
> application rbd
> pool 5 'pool 5' replicated size 3 min_size 2 crush_rule 0 object_hash
> rjenkins pg_num 256 pgp_num 256 autoscale_mode warn last_change
> 2318859 flags hashpspool,selfmanaged_snaps
> min_write_recency_for_promote 1 stripe_width 0 application rbd

OK, now maybe share the output of `ceph df` so we can see how much data
is in each pool?

Assuming that the majority of your data is in 'pool 3' with 16384 PGs,
your current PG values are just fine. (You should have around 110 PGs
per OSD.) The pg_autoscaler aims for 100 PGs per OSD and doesn't make
changes unless a pool has 4x too few or too many PGs.

Unless you are planning to put a large proportion of data into the
other pools, I'd leave pg_autoscaler disabled and move on to the next
task.

-- Dan

> The mgr module is not enabled yet.
>
> >
> > ?
> >
> > This way we can see how the pools are configured and help recommend
> > whether pg_autoscaler is worth enabling.
>
> --
> The self-help group "UTF-8 problems" will, as an exception, meet in
> the large hall this time.
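
For reference, a rough sketch of the checks discussed above, using
standard Ceph CLI commands (Nautilus or later); <pool> is a placeholder
for your own pool names, and enabling the module or setting 'warn' mode
is only illustrative, not a recommendation beyond what's said above:

    # Per-pool data usage, to see which pools actually hold the data
    ceph df

    # Per-OSD utilisation; the PGS column shows how many PG replicas
    # each OSD currently carries (compare against the ~100 target)
    ceph osd df

    # If you want autoscaler recommendations without any rebalancing:
    ceph mgr module enable pg_autoscaler              # the mgr module mentioned above
    ceph osd pool set <pool> pg_autoscale_mode warn   # 'warn' only reports; 'on' would rebalance
    ceph osd pool autoscale-status                    # current vs. suggested pg_num per pool

In 'warn' mode the autoscaler only raises a health warning when a pool
is off by more than the 4x factor mentioned above, so it's a low-risk
way to see its recommendations before deciding anything.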