inconsistent pg on erasure coded pool

Hi,

We have an inconsistency / scrub error on an erasure-coded pool that I can't seem to solve.

[root@osd008 ~]# ceph health detail
HEALTH_ERR 1 pgs inconsistent; 1 scrub errors
pg 5.144 is active+clean+inconsistent, acting [81,119,148,115,142,100,25,63,48,11,43]
1 scrub errors
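In case it helps, I believe the per-object detail (which shards are missing or inconsistent) can be dumped with rados; assuming I'm reading the docs right, the invocation would be:

[root@osd008 ~]# rados list-inconsistent-obj 5.144 --format=json-pretty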

In the log files of osd.81 (the pg's primary), it seems there is 1 missing shard:

/var/log/ceph/ceph-osd.81.log.2.gz:2017-10-02 23:49:11.940624 7f0a9d7e2700 -1 log_channel(cluster) log [ERR] : 5.144s0 shard 63(7) missing 5:2297a2e1:::10014e2d8d5.00000000:head
/var/log/ceph/ceph-osd.81.log.2.gz:2017-10-03 00:48:06.681941 7f0a9d7e2700 -1 log_channel(cluster) log [ERR] : 5.144s0 deep-scrub 1 missing, 0 inconsistent objects
/var/log/ceph/ceph-osd.81.log.2.gz:2017-10-03 00:48:06.681947 7f0a9d7e2700 -1 log_channel(cluster) log [ERR] : 5.144 deep-scrub 1 errors
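If I read that correctly, shard 63(7) means the copy at position 7 of the acting set, which is osd.63. The pg can also be queried for more state (output omitted here, it's long):

[root@osd008 ~]# ceph pg 5.144 query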

I tried running ceph pg repair on the pg, but nothing changed. I also tried starting a new deep-scrub on osd 81 (ceph osd deep-scrub 81), but I don't see any deep-scrub starting on that osd.
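For completeness, these are the exact commands I ran:

[root@osd008 ~]# ceph pg repair 5.144
[root@osd008 ~]# ceph osd deep-scrub 81

I'm not sure whether the per-pg form (ceph pg deep-scrub 5.144) would behave any differently here.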

How can we solve this?

Thank you!


Kenneth




