Re: Squid: deep scrub issues

No, Marc. The recommended value is always the one the devs agreed on at a point in time.
Keep it at the defaults.
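
For reference, "keeping the defaults" here just means clearing any override in
the mon config database. A minimal sketch, assuming the override was set under
the osd section (osd.0 below is only an example daemon to check against):

  # Drop the cluster-level override so the option falls back to its built-in default
  ceph config rm osd osd_scrub_chunk_max

  # Check the value a running OSD is actually using
  ceph config show osd.0 osd_scrub_chunk_max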

Frédéric.

________________________________
From: Marc <Marc@xxxxxxxxxxxxxxxxx>
Sent: Saturday, 30 November 2024 22:49
To: Frédéric Nass; Laimis Juzeliūnas
Cc: ceph-users
Subject: RE: Squid: deep scrub issues

So is this recommended for all new Squid clusters? 

osd_scrub_chunk_max from 25 
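
For what it's worth, a quick way to see the compiled-in default and whether
anything overrides it - a small sketch, assuming the option is managed through
the mon config database rather than a local ceph.conf:

  # Show the option's description and its built-in default for this release
  ceph config help osd_scrub_chunk_max

  # Show any value set for OSDs in the cluster configuration database
  ceph config get osd osd_scrub_chunk_max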


> 
> To clarify, Squid reduced osd_scrub_chunk_max from 25 to 15 to limit the
> impact on client I/Os, which may have led to increased (deep) scrubbing
> times.
> My advice was to raise this value back to 25 and see the influence of
> this change. But clearly, this is a more serious matter.
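> 
> For the record, raising it back amounts to something like the sketch below
> (osd.0 is only an example daemon to verify against):
> 
>   # Raise the scrub chunk size back to the pre-Squid value for all OSDs
>   ceph config set osd osd_scrub_chunk_max 25
> 
>   # Confirm a running OSD picked the change up
>   ceph config show osd.0 osd_scrub_chunk_max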
> 
> Thank you for creating tracker [1]. I'll do my best to ensure it gets 
> the appropriate visibility. 
> 
> Cheers, 
> Frédéric. 
> 
> [1] https://tracker.ceph.com/issues/69078 
> 
> ----- On 28 Nov 24, at 22:58, Laimis Juzeliūnas
> laimis.juzeliunas@xxxxxxxxxx wrote:
> 
> > Hi all, sveikas, 
> > 
> > Thanks everyone for the tips and for trying to help out!
> > I've eventually opened a bug tracker issue for the case to get more
> > developers involved: https://tracker.ceph.com/issues/69078
> > 
> > We tried decreasing osd_scrub_chunk_max from 25 to 15 as per Frédéric's
> > suggestion, but unfortunately did not observe any signs of relief. One
> > Squid user in a Reddit community thread confirmed the same after
> > decreasing it - no results.
> > There are more users in the thread who tried out various cluster
> > configuration tunings, including osd_mclock_profile with
> > high_recovery_ops, but still no one has managed to get any good results.
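> > 
> > For completeness, the mClock tuning mentioned above boils down to something
> > like the sketch below (balanced is, as far as I recall, the stock profile
> > on Squid):
> > 
> >   # Give background ops (scrub/recovery) a larger share of the IOPS budget
> >   ceph config set osd osd_mclock_profile high_recovery_ops
> > 
> >   # Revert to the stock profile if it makes no difference
> >   ceph config set osd osd_mclock_profile balanced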
> > 
> > Our scrub cycle runs 24/7 with no time windows/schedules, so there is no
> > possibility of queue buildup due to time constraints.
> > And yes - our longest-running PG has now been in deep scrub for 23 days
> > (and still counting).
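> > 
> > One rough way to spot the worst offenders - a sketch only, with field names
> > taken from the pg dump JSON of recent releases and 1.2f as a placeholder
> > pgid:
> > 
> >   # List the ten PGs with the oldest deep-scrub timestamps
> >   ceph pg dump pgs --format json 2>/dev/null \
> >     | jq -r '.pg_stats[] | [.pgid, .last_deep_scrub_stamp, .state] | @tsv' \
> >     | sort -k2 | head
> > 
> >   # Dig into a single long-running PG
> >   ceph pg 1.2f query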
> > 
> > 
> > Laimis J. 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



