* Try applying the settings to global so that mons/mgrs get them.
* Set your shallow scrub settings back to the default. Shallow scrubs consume very few resources.
* Set your randomize_ratio back to the default; you're just bunching them up.
* Set the load threshold back to the default; I can't imagine any OSD node ever having a load < 0.3, so you're basically keeping scrubs from ever running.
* osd_deep_scrub_interval is the only thing you should need to change. (See the command sketch after the quoted message below.)

> On Mar 5, 2024, at 2:42 AM, Nicola Mori <mori@xxxxxxxxxx> wrote:
>
> Dear Ceph users,
>
> In order to reduce the deep scrub load on my cluster I set the deep scrub interval to two weeks, and tuned other parameters as follows:
>
> # ceph config get osd osd_deep_scrub_interval
> 1209600.000000
> # ceph config get osd osd_scrub_sleep
> 0.100000
> # ceph config get osd osd_scrub_load_threshold
> 0.300000
> # ceph config get osd osd_deep_scrub_randomize_ratio
> 0.100000
> # ceph config get osd osd_scrub_min_interval
> 259200.000000
> # ceph config get osd osd_scrub_max_interval
> 1209600.000000
>
> With my admittedly poor knowledge of Ceph's deep scrub procedures, these settings should spread the deep scrub operations over two weeks instead of the default one week, lowering the scrub frequency and the related load. But I'm currently getting warnings like:
>
> [WRN] PG_NOT_DEEP_SCRUBBED: 56 pgs not deep-scrubbed in time
> pg 3.1e1 not deep-scrubbed since 2024-02-22T00:22:55.296213+0000
> pg 3.1d9 not deep-scrubbed since 2024-02-20T03:41:25.461002+0000
> pg 3.1d5 not deep-scrubbed since 2024-02-20T09:52:57.334058+0000
> pg 3.1cb not deep-scrubbed since 2024-02-20T03:30:40.510979+0000
> . . .
>
> I don't understand the first one: since the deep scrub interval should be two weeks, I don't expect warnings for PGs which have been deep-scrubbed less than 14 days ago (at the moment I'm writing it's Tue Mar 5 07:39:07 UTC 2024).
>
> Moreover, I don't understand why the deep scrub for so many PGs is lagging behind. Is there something wrong in my settings?
>
> Thanks in advance for any help,
>
> Nicola
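
For reference, a minimal (untested) sketch of the commands the advice above implies, assuming the standard "ceph config" CLI. Removing an override with "ceph config rm" makes the built-in default apply again; 1209600 is the two-week interval from the original post.

Revert the tuned scrub options to their defaults:

# ceph config rm osd osd_scrub_sleep
# ceph config rm osd osd_scrub_load_threshold
# ceph config rm osd osd_deep_scrub_randomize_ratio
# ceph config rm osd osd_scrub_min_interval
# ceph config rm osd osd_scrub_max_interval

Then set only the deep scrub interval, at global scope so the mons/mgrs pick it up as well, and verify what the OSDs will actually use (the get should now report the 1209600.000000 set above):

# ceph config set global osd_deep_scrub_interval 1209600
# ceph config get osd osd_deep_scrub_interval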