Hello Eugen,

Thanks for your reply. ceph osd set nodeep-scrub does not stop anything once repairs are already running. The repair started another round of deep-scrub + repair, which is not controlled by this command.

When I started, my cluster utilization was 74%, and now that it has finished the cluster is showing 43% (a surprising figure; can a repair really shuffle that much data? Even my OSDs were between 60-83% utilized). Before starting the repair I had manually increased the pool's PGs from 1024 to 2048, but the pool only came down to 84%, so I decided to run a pool repair.

During the repair, with utilization that high, the cluster was struggling and reported "SLOW OSD Communication from osd.x to osd.y" a great many times. I am not sure what happened or how the utilization came down that much. If a repair can change things this much, there should also be a command to pause/unpause it.

Regards
Dev

> On Jan 25, 2025, at 1:15 AM, Eugen Block <eblock@xxxxxx> wrote:
>
> But they would only reveal inconsistent PGs during deep-scrub.
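
P.S. For reference, roughly the commands involved (a sketch only, not the exact session; the pool name "mypool" and the PG id are placeholders, assuming the standard ceph CLI):

    # bump the pool's PG count (1024 -> 2048)
    ceph osd pool set mypool pg_num 2048

    # flag set beforehand, which did not stop the already queued deep-scrub+repair work
    ceph osd set nodeep-scrub

    # repair issued per PG (repeated for the PGs of the pool)
    ceph pg repair <pgid>

    # checking pool and per-OSD utilization before and after
    ceph df
    ceph osd df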