Currently the general recommendation is to switch the scheduler back
to wpq instead of mclock; that lets you control backfill behaviour
with the usual config options such as osd_max_backfills,
osd_recovery_max_active, osd_recovery_max_active_hdd and
osd_recovery_max_active_ssd.
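For example, something along these lines should do it (the numbers are
only placeholders to tune for your hardware, and the osd_op_queue
change only takes effect after the OSDs have been restarted):

  # switch the op queue scheduler (needs an OSD restart to apply)
  ceph config set osd osd_op_queue wpq
  # then raise the limits, e.g. allow 4 concurrent backfills per OSD
  ceph config set osd osd_max_backfills 4
  ceph config set osd osd_recovery_max_active_hdd 8
  ceph config set osd osd_recovery_max_active_ssd 16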
Quoting Nicola Mori <mori@xxxxxxxxxx>:
Dear Ceph users,
yesterday I added a new host to my cluster. This triggered a
rebalance and now backfill is in progress. What I'm seeing is that
many PGs are in backfill_wait state, and just a few are actually
backfilling:
pgs: 35081735/215211118 objects misplaced (16.301%)
348 active+remapped+backfill_wait
174 active+clean
7 active+remapped+backfilling
I prioritized the recovery by setting the high_recovery_ops
scheduler profile, but this only increased the recovery rate a bit
without increasing the number of actually backfilling PGs.
Is there a way to increase the number of PGs that backfill at the
same time? I guess this would speed up the operation, but I didn't
find a way to do so. Currently I have very little load from users,
so I can devote all the resources to the recovery.
Using Ceph 19.2.0 managed by cephadm.
Thanks,
Nicola
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx