Re: Problems with long taking deep-scrubbing processes causing PG_NOT_DEEP_SCRUBBED

One way this can happen is if you have the default setting

	osd_scrub_during_recovery=false

If you’ve been doing a lot of [re]balancing, drive replacements, topology changes, expansions, etc., scrubs can be starved, especially if you’re doing EC on HDDs.
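
If that turns out to be the culprit, you can check and, if you’re comfortable with the extra load, temporarily relax the setting at runtime. The commands below are just a sketch (the injectargs variant only affects running daemons and is not persisted):

	# show the current value via the centralized config
	ceph config get osd osd_scrub_during_recovery

	# allow scrubs to proceed while recovery/backfill is running
	ceph config set osd osd_scrub_during_recovery true

	# or inject it only into the running OSDs without persisting it
	ceph tell osd.* injectargs '--osd_scrub_during_recovery=true'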

HDD or SSD OSDs?  Replication or EC?

Number of OSDs? Number of PGs? Values of osd_scrub_max_interval and osd_deep_scrub_interval ?
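
To save a round trip, something like the following should answer most of that (assuming the centralized config store is in use, which it normally is on Nautilus; otherwise check ceph.conf or the per-daemon config):

	ceph osd stat                                  # OSD counts
	ceph pg stat                                   # total PG count
	ceph osd pool ls detail                        # pools, pg_num, replicated vs EC
	ceph config get osd osd_scrub_max_interval
	ceph config get osd osd_deep_scrub_interval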

— aad

> On Jul 31, 2020, at 10:52 AM, ceph@xxxxxxxxxx wrote:
> 
> What happens when you start a scrub manually?
> 
> Imo 
> 
> ceph osd deep-scrub xyz
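> 
> (Or, for a single overdue PG, you can look up its id with "ceph health detail" and deep-scrub just that PG; the id below is only a placeholder:
> 
> ceph pg deep-scrub 1.2f
> )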
> 
> Hth
> Mehmet 
> 
> 
> Am 31. Juli 2020 15:35:49 MESZ schrieb Carsten Grommel - Profihost AG <c.grommel@xxxxxxxxxxxx>:
>> Hi,
>> 
>> we are having problems with really long-running deep-scrub processes 
>> causing PG_NOT_DEEP_SCRUBBED and ceph HEALTH_WARN. One PG has been 
>> waiting for a deep scrub since 2020-05-18.
>> 
>> Is there any way to speed up the deep-scrubbing?
>> 
>> Ceph-Version:
>> 
>> ceph version 14.2.8-3-gc6b8eedb77 
>> (c6b8eedb771089fe3b0a95da93158ec4144758f3) nautilus (stable)
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



