Re: OSD tries (and fails) to scrub the same PGs over and over


 



Hi,

what's the cluster status? Is there recovery or backfilling going on?
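A few standard Ceph CLI checks may help narrow this down (a sketch only; the PG IDs are taken from the log excerpt quoted below, and `<primary-osd-id>` is a placeholder for the PG's primary OSD):

```shell
# Overall cluster health -- scrubs are deferred while recovery/backfill is active
ceph -s

# Last scrub / deep-scrub timestamps for the PGs seen in the log
ceph pg dump pgs | egrep '24\.3ea|27\.aae|24\.aa'

# Detailed state of one PG, filtered to scrub-related fields
ceph pg 24.3ea query | grep -i scrub

# Temporarily raise OSD debug logging to see why the scrub aborts
ceph tell osd.<primary-osd-id> config set debug_osd 20
```

Remember to lower `debug_osd` again afterwards, as level 20 is very verbose.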


Quoting Vladimir Brik <vladimir.brik@xxxxxxxxxxxxxxxx>:

I have a PG that hasn't been scrubbed in over a month and not deep-scrubbed in over two months.

I tried forcing a scrub with `ceph pg scrub` / `ceph pg deep-scrub`, but without success.

Looking at the log of that PG's primary OSD, it appears that every once in a while the OSD attempts (and apparently fails) to scrub that PG, along with two others, over and over. For example:

2023-07-19T16:26:07.082 ... 24.3ea scrub starts
2023-07-19T16:26:10.284 ... 27.aae scrub starts
2023-07-19T16:26:11.169 ... 24.aa scrub starts
2023-07-19T16:26:12.153 ... 24.3ea scrub starts
2023-07-19T16:26:13.346 ... 27.aae scrub starts
2023-07-19T16:26:16.239 ... 24.aa scrub starts
...

Lines like that are repeated throughout the log file.


Has anyone seen something similar? How can I debug this?

I am running Ceph 17.2.5.


Vlad
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


