Hi Dave,
We have checked the hardware and it seems fine.
The same OSDs host numerous other PGs which are unaffected by this issue.
All of the PGs reported as inconsistent/repair_failed belong to the
same metadata pool.
We did run a `ceph repair` on them initially, which is when the "too
many repaired reads" error popped up, I think.
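In case it's useful context, the steps involved are roughly the
standard ones (the <pgid> below is just a placeholder for one of the
affected PGs):

  # show inconsistent PGs and the repaired-reads warning
  ceph health detail
  # inspect what scrub flagged in an affected PG
  rados list-inconsistent-obj <pgid> --format=json-pretty
  # re-issue the repair on that PG
  ceph pg repair <pgid>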
Cheers,
Pascal
Dave Holland wrote on 24.06.22 11:25:
Hi,
I can't comment on the CephFS side, but "Too many repaired reads on 2
OSDs" prompts me to suggest checking the hardware -- when I've seen that
recently, it was due to failing HDDs. I say "failing" not "failed"
because the disks were giving errors on a few sectors but most I/O was
working OK, so neither Linux nor Ceph ejected the disk; and repeated
PG repair attempts were unsuccessful.
Dave