Do you have debug logs from that OSD?
I have only seen that during a recent upgrade to Pacific: the PGs in
that cluster are huge, and adopting the OSDs and restarting them after
the bluestore quick-fixes kept interrupting the scrubs, so in that case
there was an explanation for it.
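If not, it might help to temporarily raise the debug level on the PG's
primary OSD and capture a few of those scrub attempts, e.g. something
like this (osd.123 standing in for the primary OSD's ID):

  # raise debug logging for that OSD only
  ceph config set osd.123 debug_osd 10/10
  # ... wait for a few "scrub starts" lines in its log, then revert
  ceph config rm osd.123 debug_osd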
Quoting Vladimir Brik <vladimir.brik@xxxxxxxxxxxxxxxx>:
What's the cluster status? Is there recovery or backfilling
going on?
No. Everything is good except this PG is not getting scrubbed.
Vlad
On 7/21/23 01:41, Eugen Block wrote:
Hi,
What's the cluster status? Is there recovery or backfilling going on?
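A quick way to check is, for example:

  ceph -s
  # list any PGs that are not active+clean
  ceph pg dump pgs_brief | grep -v 'active+clean'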
Quoting Vladimir Brik <vladimir.brik@xxxxxxxxxxxxxxxx>:
I have a PG that hasn't been scrubbed in over a month and hasn't been
deep-scrubbed in over two months.
I tried forcing it with `ceph pg (deep-)scrub`, but with no success.
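For example (using one of the PG IDs from the log excerpt below):

  ceph pg scrub 24.3ea
  ceph pg deep-scrub 24.3ea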
Looking at the logs of that PG's primary OSD, it looks like it
repeatedly attempts (and apparently fails) to scrub that PG, along
with two others. For example:
2023-07-19T16:26:07.082 ... 24.3ea scrub starts
2023-07-19T16:26:10.284 ... 27.aae scrub starts
2023-07-19T16:26:11.169 ... 24.aa scrub starts
2023-07-19T16:26:12.153 ... 24.3ea scrub starts
2023-07-19T16:26:13.346 ... 27.aae scrub starts
2023-07-19T16:26:16.239 ... 24.aa scrub starts
...
Lines like that are repeated throughout the log file.
Has anyone seen something similar? How can I debug this?
I am running 17.2.5.
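I guess I could also check whether scrubs are disabled anywhere and
what the PG itself reports, e.g. something like:

  # check for noscrub / nodeep-scrub flags
  ceph osd dump | grep flags
  # how many concurrent scrubs an OSD allows
  ceph config get osd osd_max_scrubs
  # scrub-related state of the PG itself
  ceph pg 24.3ea query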
Vlad
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx