Hello Laimis,

To clarify: Squid reduced osd_scrub_chunk_max from 25 to 15 to limit the
impact on client I/Os, which may have led to increased (deep) scrubbing
times. My advice was to raise this value back to 25 and see the influence
of this change. But clearly, this is a more serious matter.

Thank you for creating tracker [1]. I'll do my best to ensure it gets the
appropriate visibility.

Cheers,
Frédéric.

[1] https://tracker.ceph.com/issues/69078

----- On 28 Nov 24, at 22:58, Laimis Juzeliūnas laimis.juzeliunas@xxxxxxxxxx wrote:

> Hi all, sveikas,
>
> Thanks everyone for the tips and for trying to help out!
> I've eventually raised a bug tracker for the case to get more developers
> involved: https://tracker.ceph.com/issues/69078
>
> We tried decreasing osd_scrub_chunk_max from 25 to 15 as per Frédéric's
> suggestion, but unfortunately did not observe any signs of relief. One
> Squid user in a Reddit community thread confirmed the same after
> decreasing it: no results. More users in that thread tried various
> cluster configuration tunings, including osd_mclock_profile with
> high_recovery_ops, but no one managed to get good results.
>
> Our scrub cycle runs 24/7 with no time windows/schedules, so there is
> no possibility of queue buildup due to time constraints.
> And yes - our longest-running PG is now 23 days into its deep scrub
> (and still counting).
>
>
> Laimis J.
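For reference, both knobs discussed in this thread live in the MON config
store and can be changed at runtime; a minimal sketch of the commands
involved (the values and the high_recovery_ops profile are the tunings
mentioned above, not a confirmed fix for the tracker issue):

    # Check the current value (the default dropped from 25 to 15 in Squid)
    ceph config get osd osd_scrub_chunk_max

    # Raise it back to the pre-Squid default, cluster-wide
    ceph config set osd osd_scrub_chunk_max 25

    # Another tuning tried in the thread: bias mClock toward background ops
    ceph config set osd osd_mclock_profile high_recovery_ops

    # Watch which PGs are (still) in deep scrub
    ceph pg dump pgs 2>/dev/null | grep scrubbing+deep

Both settings should take effect without restarting the OSDs; note that
per-daemon overrides (e.g. ceph config set osd.0 ...) would take
precedence over the osd-wide values shown here.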