Re: xfs corruption, data disaster!

OK, I see the problem. Thanks for the explanation.
However, he talks about 4 hosts. So with the default CRUSH map, losing 1
or more OSDs on the same host is irrelevant.

The real problem is that he lost OSDs on 4 different hosts with pools of
size 3, so he lost the PGs whose replicas were all mapped to failing drives.

So he lost 22 PGs. But I guess the cluster has thousands of PGs, so the
actual data loss is small. Is that correct?
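
(For a rough sense of scale, here is a back-of-the-envelope sketch; I'm
assuming PGs hold roughly equal amounts of data, and the pg_num below is
only a placeholder since the real total isn't stated in this thread.)

    # rough estimate of the share of data affected by the down PGs
    pgs_down = 22
    total_pgs = 4096   # hypothetical; substitute the real pg_num from "ceph -s"
    print("fraction of PGs (and roughly of objects) down: "
          "{:.2%}".format(pgs_down / total_pgs))
    # -> fraction of PGs (and roughly of objects) down: 0.54%

Even a small fraction can hurt, though: any RBD image or file with
objects in one of those 22 PGs will still have unreadable pieces.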

thanks

Saverio

2015-05-07 4:16 GMT+02:00 Christian Balzer <chibi@xxxxxxx>:
>
> Hello,
>
> On Thu, 7 May 2015 00:34:58 +0200 Saverio Proto wrote:
>
>> Hello,
>>
>> I don't get it. You lost just 6 OSDs out of 145 and your cluster is not
>> able to recover?
>>
> He lost 6 OSDs at the same time.
> With 145 OSDs and standard replication of 3, losing 3 OSDs makes data loss
> already extremely likely; with 6 OSDs gone it is approaching certainty.
>
> Christian
>> what is the status of ceph -s ?
>>
>> Saverio
>>
>>
>> 2015-05-04 9:00 GMT+02:00 Yujian Peng <pengyujian5201314@xxxxxxx>:
>> > Hi,
>> > I'm encountering a data disaster. I have a Ceph cluster with 145 OSDs.
>> > The data center had a power problem yesterday, and all of the Ceph
>> > nodes went down. Now I find that 6 disks (XFS) in 4 nodes have data
>> > corruption. Some disks cannot be mounted, and some disks show I/O
>> > errors in syslog:
>> >         mount: Structure needs cleaning
>> >         xfs_log_force: error 5 returned
>> > I tried to repair one with xfs_repair -L /dev/sdx1, but then ceph-osd
>> > reported a leveldb error:
>> >         Error initializing leveldb: Corruption: checksum mismatch
>> > I cannot start the 6 OSDs, and 22 PGs are down.
>> > This is really a tragedy for me. Can you give me some ideas for
>> > recovering the XFS filesystems? Thanks very much!
>> >
>> >
>> >
>>
>
>
> --
> Christian Balzer        Network/Systems Engineer
> chibi@xxxxxxx           Global OnLine Japan/Fusion Communications
> http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



