On 6/27/22 09:22, Marcus Müller wrote:
Hi Stefan,
thanks for the fast reply. I did some research and have the following
output:
~ $ rados list-inconsistent-pg {pool-name1}
[]
~ $ rados list-inconsistent-pg {pool-name2}
[]
~ $ rados list-inconsistent-pg {pool-name3}
[]
—
~ $ rados list-inconsistent-obj 7.989
{"epoch":3006349,"inconsistents":[]}
~ $ rados list-inconsistent-obj 7.28f
{"epoch":3006337,"inconsistents":[]}
~ $ rados list-inconsistent-obj 7.603
{"epoch":3006329,"inconsistents":[]}
~ $ ceph config dump | grep osd_scrub_auto_repair
(empty output)
~ $ ceph daemon mon.ceph4 config get osd_scrub_auto_repair
{
    "osd_scrub_auto_repair": "true"
}
What does this tell me now?
That an admin has changed this value, and it's repairing automatically
(that's totally fine of course).
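Since ceph config dump shows nothing for it, the override most likely lives in a local ceph.conf or was injected at runtime rather than in the central config store. A quick way to check (just a sketch, run on the host that carries the daemon; the grep only narrows the output down and the exact format may vary between releases) is to compare the running config against the defaults:
~ $ ceph daemon mon.ceph4 config diff | grep -A 5 osd_scrub_auto_repair
~ $ ceph daemon osd.0 config show | grep osd_scrub_auto_repair
(osd.0 is just an example id, use one of your own OSDs.)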
The setting can be changed to false of course, but as list-inconsistent-obj
shows something, I would like to find the reason for that first.
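For reference, disabling it cluster-wide would just be (assuming you want the value to apply to all OSDs via the central config):
~ $ ceph config set osd osd_scrub_auto_repair false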
You might set debug_osd = 20/20 and initiate a repair. It might log
why it's repairing: I guess checksum mismatch. Not sure if it helps in
finding the root cause, though.
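Roughly something like this (osd.N is a placeholder, pick an OSD from the PG's acting set; 7.989 is one of the PGs from your output above, and 1/5 is the usual debug_osd default to go back to):
~ $ ceph pg map 7.989
~ $ ceph tell osd.N config set debug_osd 20/20
~ $ ceph pg deep-scrub 7.989     (or: ceph pg repair 7.989)
~ $ ceph tell osd.N config set debug_osd 1/5
Then grep that OSD's log for the scrub/repair lines.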
Gr. Stefan