On Wed, Jun 29, 2022 at 9:55 PM Stefan Kooman <stefan@xxxxxx> wrote:
> On 6/29/22 19:34, Curt wrote:
> > Hi Stefan,
> >
> > Thank you, that definitely helped. I bumped it to 20% for now and that's
> > giving me around 124 PGs backfilling at 187 MiB/s, 47 Objects/s. I'll
> > see how that runs and then increase it a bit more if the cluster handles
> > it ok.
> >
> > Do you think it's worth enabling scrubbing while backfilling?
>
> If the cluster can cope with the extra load, sure. If it slows down the
> backfilling to levels that are too slow ... temporarily disable it.
>
> > Since this is going to take a while. I do have 1 inconsistent PG that
> > has now become 10 as it splits.
>
> Hmm. Well, if it finds broken PGs, for sure pause backfilling (ceph osd
> set nobackfill) and have it handle this ASAP: ceph pg repair $pg.
> Something is wrong, and you want to have this fixed sooner rather than
> later.

When I try to run a repair, nothing happens. If I try to list
inconsistent objects, I get "No scrub information available for pg 12.12".
If I tell it to run a deep scrub, nothing. I'll set debug and see what I
can find in the logs.

> Not sure what hardware you have, but you might benefit from disabling
> write caches, see this link:
>
> https://docs.ceph.com/en/quincy/start/hardware-recommendations/#write-caches
>
> Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
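
For anyone following along, the workflow Stefan describes might be sketched roughly as below. This is a hedged sketch, not from the thread itself: the PG id 12.12 is taken from the error message above, and the commands assume a cluster where `ceph` and `rados` are on the PATH with admin credentials. Note that "No scrub information available" typically means the PG has not been (deep-)scrubbed recently enough to have inconsistency details recorded, which is why a deep scrub is requested before the repair.

```shell
#!/bin/sh
# Sketch of the pause-backfill / scrub / repair sequence discussed above.
# Assumes admin access to the cluster; PG 12.12 is the one from the thread.
PG=12.12

# 1. Pause backfilling so recovery I/O does not compete with the repair.
ceph osd set nobackfill

# 2. Request a deep scrub so the cluster records what is inconsistent.
#    (Without a recent deep scrub, list-inconsistent-obj reports
#    "No scrub information available".)
ceph pg deep-scrub "$PG"

# 3. After the scrub completes, inspect the recorded inconsistencies.
rados list-inconsistent-obj "$PG" --format=json-pretty

# 4. Ask the primary OSD to repair the PG.
ceph pg repair "$PG"

# 5. Once the PG is active+clean again, re-enable backfilling.
ceph osd unset nobackfill
```

One caveat worth noting: scrub and repair requests are queued, not executed immediately, so "nothing happens" right after `ceph pg repair` can simply mean the request is waiting behind scrub scheduling limits (e.g. `osd_max_scrubs`) rather than being ignored.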