Lots of PGs in backfill_wait state

Dear Ceph users,

Yesterday I added a new host to my cluster. This triggered a rebalance, and backfill is now in progress. What I'm seeing is that many PGs are stuck in backfill_wait state, while only a few are actually backfilling:

    pgs:     35081735/215211118 objects misplaced (16.301%)
             348 active+remapped+backfill_wait
             174 active+clean
             7   active+remapped+backfilling
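
For context, I added the host through the cephadm orchestrator, roughly like this (hostname and address below are just placeholders):

    # add the new host to the cluster (placeholder hostname and IP)
    ceph orch host add ceph-node05 10.0.0.15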

I prioritized recovery by setting the high_recovery_ops mClock scheduler profile, but this only increased the recovery rate a bit and did not increase the number of PGs actively backfilling.
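For reference, I switched the profile with the usual config command, applied to all OSDs:

    # switch the mClock scheduler profile to favor recovery/backfill work
    ceph config set osd osd_mclock_profile high_recovery_ops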

Is there a way to increase the number of PGs that backfill at the same time? I guess this would speed things up, but I couldn't find how to do it. I currently have very little user load, so I can devote all the resources to recovery.
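What I was vaguely expecting to find is a knob along the lines of osd_max_backfills, but I'm not sure it is still honored by the mClock scheduler in this release, so I haven't touched it:

    # guess: raise per-OSD backfill concurrency; I have not verified whether
    # mClock honors this setting without additional overrides
    ceph config set osd osd_max_backfills 3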

Using Ceph 19.2.0 managed by cephadm.

Thanks,

Nicola


