mon_warn_not_scrubbed and mon_warn_not_deep_scrubbed
have been renamed. They are now mon_warn_pg_not_scrubbed_ratio
and mon_warn_pg_not_deep_scrubbed_ratio
respectively. This is to clarify that these warnings are related to
pg scrubbing and are a ratio of the related interval. These options
are now enabled by default.
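Since the release notes describe these as mon settings expressed as a ratio of the related scrub interval, they should be adjustable at runtime through the Nautilus centralized config. A rough sketch (the default value and the exact effect of the ratio are assumptions here, check the docs for your release before changing it):

# show the current warning threshold
ceph config get mon mon_warn_pg_not_deep_scrubbed_ratio
# relax it so PGs must be further past their deep-scrub interval before HEALTH_WARN
ceph config set mon mon_warn_pg_not_deep_scrubbed_ratio 1.0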
Hi Muthu
We hit the same issue, with nearly 2000 PGs not deep-scrubbed in time.
We are manually forcing deep scrubs with:
ceph health detail | grep -i not | awk '{print $2}' | while read i; do ceph pg deep-scrub ${i}; done
It launches roughly 20-30 PGs to be deep-scrubbed at a time. I think you can improve on it with a sleep of 120 seconds between scrubs to avoid overloading your OSDs, as in the sketch below.
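Something like this (an untested sketch; the "not deep-scrubbed since" pattern assumes the Nautilus "ceph health detail" output format, and 120 seconds is just the pause suggested above):

# deep-scrub the overdue PGs one at a time, pausing between them to spare the OSDs
ceph health detail | awk '/not deep-scrubbed since/ {print $2}' | while read i; do
  ceph pg deep-scrub ${i}
  sleep 120
done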
To disable deep scrubbing you can use "ceph osd set nodeep-scrub". You can also confine deep scrubs to a time window and a load threshold:
#Start Scrub 22:00
osd scrub begin hour = 22
#Stop Scrub 08:00
osd scrub end hour = 8
#Scrub Load 0.5
osd scrub load threshold = 0.5
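If you prefer not to edit ceph.conf and restart the OSDs, the same values should also be settable at runtime on Nautilus via the centralized config (a sketch, assuming the monitor-managed config is in use):

ceph config set osd osd_scrub_begin_hour 22
ceph config set osd osd_scrub_end_hour 8
ceph config set osd osd_scrub_load_threshold 0.5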
Regards,
Manuel
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> On behalf of nokia ceph
Sent: Tuesday, 14 May 2019 11:44
To: Ceph Users <ceph-users@xxxxxxxxxxxxxx>
Subject: ceph nautilus deep-scrub health error
Hi Team,
After upgrading from Luminous to Nautilus, we see a "654 pgs not deep-scrubbed in time" error in ceph status. How can we disable this warning? In our setup we have disabled deep-scrubbing because of performance issues.
Thanks,
Muthu