On Wed, Jun 29, 2022 at 11:22 PM Curt <lightspd@xxxxxxxxx> wrote:
>
> On Wed, Jun 29, 2022 at 9:55 PM Stefan Kooman <stefan@xxxxxx> wrote:
>
>> On 6/29/22 19:34, Curt wrote:
>> > Hi Stefan,
>> >
>> > Thank you, that definitely helped. I bumped it to 20% for now, and
>> > that's giving me around 124 PGs backfilling at 187 MiB/s, 47
>> > objects/s. I'll see how that runs and then increase it a bit more
>> > if the cluster handles it OK.
>> >
>> > Do you think it's worth enabling scrubbing while backfilling?
>>
>> If the cluster can cope with the extra load, sure. If it slows down
>> the backfilling to levels that are too slow ... temporarily disable
>> it.
>>
>> > Since this is going to take a while: I do have 1 inconsistent PG
>> > that has now become 10 as it splits.
>>
>> Hmm. Well, if it finds broken PGs, for sure pause backfilling (ceph
>> osd set nobackfill) and have it handle this ASAP: ceph pg repair
>> $pg. Something is wrong, and you want to have this fixed sooner
>> rather than later.
>
> When I try to run a repair, nothing happens; if I try to list
> inconsistent-obj, I get "No scrub information available for 12.12".
> If I tell it to run a deep scrub, nothing. I'll set debug and see
> what I can find in the logs.

Just to give a quick update: this one was my fault, I missed a flag.
Once it was set correctly, the PG scrubbed and repaired. The cluster is
now back to adding more PGs, which continues to get a bit faster as it
expands. I'm now up to pg_num 1362 and pgp_num 1234, with backfills
happening at 250-300 MiB/s, 60-70 objects/s. Thanks for all the help.

>> Not sure what hardware you have, but you might benefit from disabling
>> write caches; see this link:
>>
>> https://docs.ceph.com/en/quincy/start/hardware-recommendations/#write-caches
>
> Thanks, I'm disabling the cache and I'll see if it helps at all.

>> Gr. Stefan
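
For reference, the scrub-and-repair sequence discussed above boils down
to roughly the commands below. This is a sketch, not a transcript of
what was actually run: the PG id 12.12 is taken from the thread, and
the guess that the missed flag was noscrub/nodeep-scrub (which, when
set, can cause scrub, deep-scrub, and repair requests to be quietly
skipped) is an assumption, since the exact flag isn't named.

    # Check which cluster-wide flags are set; noscrub/nodeep-scrub
    # would explain repair and deep-scrub requests going nowhere.
    ceph osd dump | grep flags

    # Pause backfilling so the repair takes priority.
    ceph osd set nobackfill

    # Deep-scrub the PG, then inspect what the scrub recorded.
    ceph pg deep-scrub 12.12
    rados list-inconsistent-obj 12.12 --format=json-pretty

    # Repair the inconsistent PG.
    ceph pg repair 12.12

    # Resume backfilling once the PG is active+clean again.
    ceph osd unset nobackfill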