Re: repair does not work for inconsistent PG where all three replicas are the same

On 1/10/19 8:36 AM, hnuzhoulin2 wrote:
> 
> Hi, cephers
> 
> I have two inconsistent PGs. I tried to list the inconsistent objects, but got nothing:
> 
> rados list-inconsistent-obj 388.c29
> No scrub information available for pg 388.c29
> error 2: (2) No such file or directory
> 


Have you tried running a deep-scrub on this PG to see what that does?
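
Something along these lines should tell you more (a rough sketch, using the PG ID from your report; adjust as needed):

# trigger a deep-scrub of the PG
ceph pg deep-scrub 388.c29
# once the scrub has finished, check the result
ceph health detail
rados list-inconsistent-obj 388.c29 --format=json-pretty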

Wido

> So I searched the log to find the object name, and looked this name up on
> all three replicas. Yes, all three replicas are the same (the md5 matches).
> The error log is: 388.c29 shard 295: soid
> 388:9430fef2:::c2e226a9-b855-45c5-a17f-b1c697755072.1813469.4__multipart_dumbo%2f180888654%2f20181221%2fxtrabackup_full_x19_30044_20181221025000%2fx19.xbstream.2~ntwW9vwutbmOJ4bDZYehERT2AokbtAi.3595:head
> candidate had a read error
> 
> The object name is:
> DIR_9/DIR_2/DIR_C/DIR_0/DIR_F/c2e226a9-b855-45c5-a17f-b1c697755072.1813469.4\\u\\umultipart\\udumbo\\s180888654\\s20181221\\sxtrabackup\\ufull\\ux19\\u30044\\u20181221025000\\sx19.xbstream.2~ntwW9vwutbmOJ4bDZYehERT2AokbtAi.3595__head_4F7F0C29__184
> The md5 on all three replicas is: 73281ed56c92a56da078b1ae52e888e0
> 
> The stat info is:
> root@cld-osd3-48:/home/ceph/var/lib/osd/ceph-33/current/388.c29_head#
> stat
> DIR_9/DIR_2/DIR_C/DIR_0/DIR_F/c2e226a9-b855-45c5-a17f-b1c697755072.1813469.4\\u\\umultipart\\udumbo\\s180888654\\s20181221\\sxtrabackup\\ufull\\ux19\\u30044\\u20181221025000\\sx19.xbstream.2~ntwW9vwutbmOJ4bDZYehERT2AokbtAi.3595__head_4F7F0C29__184
>   Size: 4194304   Blocks: 8200       IO Block: 4096   regular file
> Device: 891h/2193d  Inode: 4300403471  Links: 1
> Access: (0644/-rw-r--r--)  Uid: (  999/    ceph)   Gid: (  999/    ceph)
> Access: 2018-12-21 14:17:12.945132144 +0800
> Modify: 2018-12-21 14:17:12.965132073 +0800
> Change: 2018-12-21 14:17:13.761129235 +0800
>  Birth: -
> 
> root@cld-osd24-48:/home/ceph/var/lib/osd/ceph-279/current/388.c29_head#
> stat
> DIR_9/DIR_2/DIR_C/DIR_0/DIR_F/c2e226a9-b855-45c5-a17f-b1c697755072.1813469.4\\u\\umultipart\\udumbo\\s180888654\\s20181221\\sxtrabackup\\ufull\\ux19\\u30044\\u20181221025000\\sx19.xbstream.2~ntwW9vwutbmOJ4bDZYehERT2AokbtAi.3595__head_4F7F0C29__184
>   Size: 4194304   Blocks: 8200       IO Block: 4096   regular file
> Device: 831h/2097d  Inode: 8646464869  Links: 1
> Access: (0644/-rw-r--r--)  Uid: (  999/    ceph)   Gid: (  999/    ceph)
> Access: 2019-01-07 10:54:23.010293026 +0800
> Modify: 2019-01-07 10:54:23.010293026 +0800
> Change: 2019-01-07 10:54:23.014293004 +0800
>  Birth: -
> 
> root@cld-osd31-48:/home/ceph/var/lib/osd/ceph-363/current/388.c29_head#
> stat
> DIR_9/DIR_2/DIR_C/DIR_0/DIR_F/c2e226a9-b855-45c5-a17f-b1c697755072.1813469.4\\u\\umultipart\\udumbo\\s180888654\\s20181221\\sxtrabackup\\ufull\\ux19\\u30044\\u20181221025000\\sx19.xbstream.2~ntwW9vwutbmOJ4bDZYehERT2AokbtAi.3595__head_4F7F0C29__184
>   Size: 4194304   Blocks: 8200       IO Block: 4096   regular file
> Device: 831h/2097d  Inode: 13141445890  Links: 1
> Access: (0644/-rw-r--r--)  Uid: (  999/    ceph)   Gid: (  999/    ceph)
> Access: 2018-12-21 14:17:12.946862160 +0800
> Modify: 2018-12-21 14:17:12.966862262 +0800
> Change: 2018-12-21 14:17:13.762866312 +0800
>  Birth: -
> 
> 
> The other PG is the same. I have tried running deep-scrub and repair; they do not work.
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



