Hello,

On Thu, 7 May 2015 00:34:58 +0200 Saverio Proto wrote:

> Hello,
>
> I don't get it. You lost just 6 OSDs out of 145 and your cluster is not
> able to recover?
>
He lost 6 OSDs at the same time.

With 145 OSDs and standard replication of 3, losing 3 OSDs at once already
makes data loss extremely likely; with 6 OSDs gone it is approaching
certainty.

Christian

> What is the status of ceph -s?
>
> Saverio
>
>
> 2015-05-04 9:00 GMT+02:00 Yujian Peng <pengyujian5201314@xxxxxxx>:
> > Hi,
> > I'm encountering a data disaster. I have a ceph cluster with 145 OSDs.
> > The data center had a power problem yesterday, and all of the ceph
> > nodes went down. Now I find that 6 disks (xfs) in 4 nodes have data
> > corruption. Some disks are unable to mount, and some disks show I/O
> > errors in syslog:
> >   mount: Structure needs cleaning
> >   xfs_log_force: error 5 returned
> > I tried to repair one with xfs_repair -L /dev/sdx1, but the ceph-osd
> > reported a leveldb error:
> >   Error initializing leveldb: Corruption: checksum mismatch
> > I cannot start the 6 OSDs, and 22 PGs are down.
> > This is really a tragedy for me. Can you give me some ideas on how to
> > recover the xfs? Thanks very much!

--
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Fusion Communications
http://www.gol.com/
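
Christian's point about near-certain data loss can be sanity-checked with a rough
back-of-the-envelope calculation. The sketch below is not from the thread: it assumes
replicas are placed uniformly at random across OSDs, ignores CRUSH failure domains,
and uses a hypothetical PG count, so treat the numbers as illustrative only.

from math import comb  # Python 3.8+

total_osds = 145   # cluster size from the thread
failed_osds = 6    # OSDs lost in the power outage
replicas = 3       # standard replication factor
num_pgs = 4096     # hypothetical PG count; substitute your own

# Probability that one particular PG had all of its replicas on the failed OSDs,
# under the uniform random placement assumption.
p_pg_lost = comb(failed_osds, replicas) / comb(total_osds, replicas)

# Probability that at least one PG lost every replica, treating PGs as independent.
p_any_lost = 1 - (1 - p_pg_lost) ** num_pgs

print(f"P(a given PG lost every replica) = {p_pg_lost:.2e}")
print(f"P(at least one PG lost entirely) = {p_any_lost:.1%}")

In a real cluster the outcome depends heavily on the PG count and on how CRUSH spreads
replicas across hosts; with the six failed OSDs spread over four nodes, even a host-level
failure domain does not rule out a PG losing all three copies.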