On Sun, Feb 12, 2023 at 20:24 Chris Dunlop <chris@xxxxxxxxxxxx> wrote:
Is this "sawtooth" pattern of remapped pgs and misplaced objects a normal
consequence of adding OSDs?
On Sun, Feb 12, 2023 at 10:02:46PM -0800, Alexandre Marangone wrote:
This could be the pg autoscaler, since you added new OSDs. You can run
"ceph osd pool ls detail" and check the pg_num and pg_num_target numbers
(IIRC) to confirm.
$ ceph osd pool ls detail
... pgp_num 46 pgp_num_target 128 ...
That indeed explains it - thanks!
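(For my own reference: a crude way to keep an eye on things while pgp_num
catches up, assuming it's the rbd.ec.data pool mentioned below, seems to be
something like:

$ watch -n 60 "ceph osd pool ls detail | grep rbd.ec.data"

...and wait for pgp_num to reach pgp_num_target.)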
OK, now I need to find out more about the pg autoscaler:
https://docs.ceph.com/en/latest/rados/operations/placement-groups/
For starters:
$ ceph osd pool autoscale-status | grep -e ^POOL -e ^rbd.ec.data
POOL          SIZE    TARGET SIZE  RATE   RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
rbd.ec.data   61058G  100.0T       1.375  208.0T        0.6610                                 1.0   128                 on         False
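(If I'm reading the columns right, the 0.6610 RATIO is driven by the 100.0T
TARGET SIZE rather than the ~60T actually stored: 100.0T x 1.375 / 208.0T
≈ 0.661.)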
Looks like I maybe should have created that pool with the "--bulk" flag,
per https://docs.ceph.com/en/latest/rados/operations/placement-groups/
--
The autoscaler uses the bulk flag to determine which pool should start
out with a full complement of PGs and only scales down when the usage
ratio across the pool is not even. However, if the pool doesn’t have the
bulk flag, the pool will start out with minimal PGs and will only gain
PGs when there is more usage in the pool.
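(If I'm reading that right, for a brand new pool the flag would be given at
creation time, something like:

$ ceph osd pool create <pool-name> --bulk

...which I didn't do here.)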
I wonder if setting "bulk" now will help things stabilize faster?
$ ceph osd pool set <pool-name> bulk true
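(And presumably the flag can be checked afterwards with:

$ ceph osd pool get <pool-name> bulk

...to confirm it took.)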
Cheers,
Chris
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx