Re: pgs not deep-scrubbed in time and pgs not scrubbed in time

Dear Eugen, dear Joachim,

Thanks for your feedback and input. The number of stuck PGs stays about the same, around 30 in total (give or take one). From what I see in the health detail output, most of these PGs are located on the same OSD, and most of them are listed under both the scrub and the deep-scrub warnings.

Most of them have not been scrubbed since the end of August …
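
For reference, something along these lines is how I check it (just a rough sketch; PG 7.1a is a placeholder ID):

    # Which PGs do the warnings refer to?
    ceph health detail | grep -E 'not (deep-)?scrubbed since'

    # Which OSDs does a given PG map to? (UP/ACTING sets)
    ceph pg dump pgs_brief | grep '^7\.1a '

    # When was it last scrubbed / deep-scrubbed?
    ceph pg 7.1a query | grep -E 'last_scrub_stamp|last_deep_scrub_stamp'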

I’ll look into your links and see what might help.

Thanks once more and regards, Götz


On 23.10.2024 at 08:35, Götz Reinicke <goetz.reinicke@xxxxxxxxxxxxxxx> wrote:

Hello Ceph Community,

My cluster was hit by a power outage some months ago. Luckily no data was lost, and powering the nodes and services back up went well.

But since then, some PGs are still shown as not scrubbed in time. Googling and searching the list turned up some debugging hints, like running "ceph pg deep-scrub" on the affected PGs or restarting the OSD daemons.
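
Something along these lines (untested sketch; PG 7.1a and osd.12 are placeholders, and the loop assumes the "pg <id> not deep-scrubbed since ..." wording of ceph health detail):

    # Ask a single PG to deep-scrub:
    ceph pg deep-scrub 7.1a

    # Or nudge every PG listed in the deep-scrub warning:
    ceph health detail | awk '/not deep-scrubbed since/ {print $2}' \
        | while read pg; do ceph pg deep-scrub "$pg"; done

    # Restart the OSD that the stuck PGs have in common, e.g. osd.12:
    systemctl restart ceph-osd@12       # classic package deployments
    # ceph orch daemon restart osd.12   # cephadm-managed clusters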

Nothing "solved" that issue here. I'm on Ceph version 18.2.4 now.

Is there anything special I can do to get those PGs scrubbed? I like having the cluster health state at OK, not WARN :) Or will time solve the problem once the PGs come around in their regular scrub cycle again?
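
As a sketch of what I mean by the regular cycle (assuming stock settings), these are the knobs I would check:

    # Intervals that drive the periodic scrub / deep-scrub cycle:
    ceph config get osd osd_scrub_max_interval
    ceph config get osd osd_deep_scrub_interval

    # Ratios the cluster uses before raising the "not (deep-)scrubbed in time" warnings:
    ceph config get mon mon_warn_pg_not_scrubbed_ratio
    ceph config get mon mon_warn_pg_not_deep_scrubbed_ratio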


Thanks for hints and suggestions. Best regards, Götz



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
