Re: 1 pgs inconsistent 2 scrub errors

Hello,

> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On
> Behalf Of Eugen Block 
> I had a similar issue recently, where I had a replication size of 2 (I
> changed that to 3 after the recovery).

Yes, we also have a replication size of 2...
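
We're planning to change it to 3 as well once the cluster is healthy again. If I'm not mistaken, that should be something along the lines of (the pool name "rbd" below is just a placeholder, ours is named differently):

ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2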

> ceph health detail
> HEALTH_ERR 16 pgs inconsistent; 261 scrub errors
> pg 1.bb1 is active+clean+inconsistent, acting [15,5]
> 
> [...CUT...] 
>
> So this object was completely missing. A ceph repair didn't work, I
> wasn't sure why. So I just created the empty object:
> 
> ceph-node2:~ # touch
> /var/lib/ceph/osd/ceph-
> 5/current/1.bb1_head/rbd\\udata.16be96558ea798.000000000000022f_
> _head_D7879BB1__1
> 
> ceph-node2:~ # ceph pg repair 1.bb1
> 
> and the result in the logs:
> 
> [...] cluster [INF] 1.bb1 repair starts
> [...] cluster [ERR] 1.bb1 shard 5: soid
> 1/d7879bb1/rbd_data.16be96558ea798.000000000000022f/head
> data_digest
> 0xffffffff != best guess data_digest  0xead60f2d from auth shard 15,
> size 0 != known size 6565888, missing attr _, missing attr snapset
> [...] cluster [ERR] 1.bb1 repair 0 missing, 1 inconsistent objects
> [...] cluster [ERR] 1.bb1 repair 1 errors, 1 fixed

I have tried your suggestion; now we have to wait and see the result. So far we still have 1 pg inconsistent, and there is nothing interesting in the logs regarding this pg.
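
If the repair still doesn't kick in, the next thing I was going to try is listing the inconsistent objects for that pg (assuming our Ceph version already supports it and the pg has been scrubbed recently):

rados list-inconsistent-obj <pgid> --format=json-pretty

and then forcing another deep scrub and repair:

ceph pg deep-scrub <pgid>
ceph pg repair <pgid>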

Regards!
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


