Re: 1 pgs inconsistent 2 scrub errors


 



Yes, we have replication size of 2 also

From what I understand, with a replication size of 2 the cluster can't decide which copy of an object is intact when one is broken, so the repair fails. With a size of 3, the cluster would see two intact copies and could repair the broken one (I guess). At least we haven't had these inconsistencies since we increased the size to 3.
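For anyone wanting to do the same, raising the replication factor is a pool setting. A rough sketch, assuming the affected pool is the default "rbd" pool (substitute your own pool name):

```shell
# Assumption: the affected pool is named "rbd"; substitute your pool name.
# Raise the number of replicas from 2 to 3; Ceph backfills the third copy.
ceph osd pool set rbd size 3
# Keep serving I/O as long as at least 2 copies remain available.
ceph osd pool set rbd min_size 2
```

Note that backfilling the extra replica generates recovery traffic, so on a busy cluster it may be worth doing outside peak hours.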


Quoting Mio Vlahović <Mio.Vlahovic@xxxxxx>:

Hello,

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On
Behalf Of Eugen Block
I had a similar issue recently, where I had a replication size of 2 (I
changed that to 3 after the recovery).

Yes, we have replication size of 2 also...

ceph health detail
HEALTH_ERR 16 pgs inconsistent; 261 scrub errors
pg 1.bb1 is active+clean+inconsistent, acting [15,5]

[...CUT...]

So this object was completely missing. A ceph repair didn't work, and I
wasn't sure why. So I just created the empty object:

ceph-node2:~ # touch /var/lib/ceph/osd/ceph-5/current/1.bb1_head/rbd\\udata.16be96558ea798.000000000000022f__head_D7879BB1__1

ceph-node2:~ # ceph pg repair 1.bb1

and the result in the logs:

[...] cluster [INF] 1.bb1 repair starts
[...] cluster [ERR] 1.bb1 shard 5: soid 1/d7879bb1/rbd_data.16be96558ea798.000000000000022f/head data_digest 0xffffffff != best guess data_digest 0xead60f2d from auth shard 15, size 0 != known size 6565888, missing attr _, missing attr snapset
[...] cluster [ERR] 1.bb1 repair 0 missing, 1 inconsistent objects
[...] cluster [ERR] 1.bb1 repair 1 errors, 1 fixed

I have tried your suggestion; now we have to wait and see the result. So far we still have 1 pg inconsistent, and nothing interesting in the logs regarding this pg.
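If the repair never shows up in the logs, it can help to see exactly which shard Ceph considers broken. On recent releases, after a deep scrub of the pg, `rados list-inconsistent-obj 1.bb1 --format=json-pretty` reports the per-shard errors. A rough sketch of picking the broken shard out of that JSON, using a made-up sample modeled on the digest/size errors above (the real report has more fields):

```python
import json

# Hypothetical sample modeled on the log above; real input comes from:
#   rados list-inconsistent-obj 1.bb1 --format=json-pretty
sample = json.loads("""
{
  "epoch": 1234,
  "inconsistents": [
    {
      "object": {"name": "rbd_data.16be96558ea798.000000000000022f", "snap": "head"},
      "errors": ["data_digest_mismatch", "size_mismatch"],
      "shards": [
        {"osd": 15, "size": 6565888, "errors": []},
        {"osd": 5,  "size": 0,       "errors": ["size_mismatch_oi", "missing_attrs"]}
      ]
    }
  ]
}
""")

def broken_shards(report):
    """Return (object_name, osd) pairs for every shard that has per-shard errors."""
    out = []
    for item in report["inconsistents"]:
        name = item["object"]["name"]
        for shard in item["shards"]:
            if shard["errors"]:
                out.append((name, shard["osd"]))
    return out

print(broken_shards(sample))
# [('rbd_data.16be96558ea798.000000000000022f', 5)]
```

With the faulty OSD identified, the repair can be retried with `ceph pg repair 1.bb1` as above.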

Regards!



--
Eugen Block                             voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG      fax     : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg                         e-mail  : eblock@xxxxxx

        Chairwoman of the Supervisory Board: Angelika Mozdzen
     Registered office and court of registration: Hamburg, HRB 90934
                  Executive Board: Jens-U. Mozdzen
                    VAT ID: DE 814 013 983

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



