Hi there.
Yesterday I ran into this error:
PG_DAMAGED Possible data damage: 2 pgs snaptrim_error
pg 11.9 is active+clean+snaptrim_error, acting [196,167,32]
pg 11.127 is active+clean+snaptrim_error, acting [184,138,1]
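(For context, that output is what "ceph health detail" prints; the exact command is an assumption on my part, but the format matches:)

    ceph health detail   # shows per-PG detail for each health warning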
Could it be because a scrub ran while the snapshots were being cleaned up?
I tried restarting the OSDs, then ran a deep-scrub and a repair, but that didn't solve the problem.
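Concretely, what I ran was roughly the following sketch, using pg 11.9 and one of its acting OSDs as an example (the OSD ID is taken from the acting set above, and I'm assuming systemd-managed OSDs):

    # restart one of the acting OSDs (example: osd.196, on the host that holds it)
    systemctl restart ceph-osd@196

    # then instruct the PG to deep-scrub and repair
    ceph pg deep-scrub 11.9
    ceph pg repair 11.9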
The "Repairing PG inconsistencies" page in the documentation is empty (http://docs.ceph.com/docs/mimic/rados/operations/pg-repair/),
so I don't know what else I can do.
Cluster info:
version 12.2.5
25 OSD nodes
12 OSDs per node. Most of them still use FileStore as the storage backend.