Re: pgs not deep-scrubbed in time and pgs not scrubbed in time

Hi Götz,

Usually, OSDs start (deep-)scrubbing PGs again after they have been powered on, so you should see PGs in (deep-)scrubbing state right now. Depending on your PG sizes, number of OSDs etc., that can take some time, of course, but the number of overdue PGs should decrease over time. If you use the default deep_scrub_interval (1 week) and your cluster hasn't complained before, you'll probably get rid of the warning within a week or so. If you want to speed things up and your cluster can handle the load, you could temporarily increase osd_max_scrubs (the maximum number of concurrent scrubs on a single OSD, default 1, can be changed at runtime):

ceph config set osd osd_max_scrubs 2
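
If you do bump it, a quick way to verify the change took effect, watch progress and revert later (just a small sketch using standard ceph CLI commands, adjust to taste):

  # confirm the runtime value
  ceph config get osd osd_max_scrubs

  # watch PGs move through the scrubbing / deep-scrubbing states
  ceph pg stat
  ceph -s

  # once the warnings are gone, revert to the default
  ceph config rm osd osd_max_scrubs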

It doesn't sound like you had this warning before, so I assume it will eventually clear. If not, you can check out the docs [0] and my recent blog post [1] about this topic.
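
If you'd rather target the affected PGs directly (as you already tried with „ceph pg deep-scrub"), you can pull their IDs out of ceph health detail and queue them one by one. A rough sketch, assuming the usual "pg <pgid> not deep-scrubbed since ..." wording in the health output (the exact format may differ between releases):

  # list the PGs the warning is about
  ceph health detail | grep 'not deep-scrubbed since'

  # queue a deep scrub for each of them (field 2 is the pgid)
  for pg in $(ceph health detail | grep 'not deep-scrubbed since' | awk '{print $2}'); do
      ceph pg deep-scrub "$pg"
  done

Keep in mind the OSDs still have to find a free scrub slot, so the scrubs may not start immediately.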

Regards,
Eugen

[0] https://docs.ceph.com/en/latest/rados/operations/health-checks/#pg-not-deep-scrubbed
[1] https://heiterbiswolkig.blogs.nde.ag/2024/09/06/pgs-not-deep-scrubbed-in-time/

Quoting Götz Reinicke <goetz.reinicke@xxxxxxxxxxxxxxx>:

Hello Ceph Community,

My cluster was hit by a power outage some months ago. Luckily no data was destroyed, and powering up the nodes and services went well.

But since then some PGs are still shown as not scrubbed in time. Googling and searching the list turned up some debugging hints, like running „ceph pg deep-scrub" on those PGs or restarting the OSD daemons.

Nothing „solved“ that issue here. I’m on ceph version 18.2.4 now.

Is there anything special I can do to have those PGs scrubbed? I like having the cluster health state OK, not warning :) Or will time solve the problem once the PGs are back in their regular cycle of being scrubbed again?


	Thanks for hints and suggestions. Best regards, Götz





