Hello, I don't get it. You lost just 6 OSDs out of 145 and your cluster is not able to recover? What is the output of ceph -s?

Saverio

2015-05-04 9:00 GMT+02:00 Yujian Peng <pengyujian5201314@xxxxxxx>:
> Hi,
> I'm encountering a data disaster. I have a ceph cluster with 145 OSDs. The
> data center had a power problem yesterday, and all of the ceph nodes went
> down. Now I find that 6 disks (XFS) across 4 nodes have data corruption.
> Some disks cannot be mounted, and some show I/O errors in syslog:
>     mount: Structure needs cleaning
>     xfs_log_force: error 5 returned
> I tried to repair one with xfs_repair -L /dev/sdx1, but ceph-osd then
> reported a leveldb error:
>     Error initializing leveldb: Corruption: checksum mismatch
> I cannot start the 6 OSDs, and 22 PGs are down.
> This is really a tragedy for me. Can you give me some ideas for recovering
> the XFS filesystems? Thanks very much!
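
As a first step, it would help to see exactly which PGs are down and whether they still have surviving replicas. A minimal diagnostic sequence would be something like the following (a sketch; run it from any node with an admin keyring):

    # Overall cluster health and recovery activity
    ceph -s
    # Which PGs are affected, and why
    ceph health detail
    # Confirm which OSDs are down/out and where they live
    ceph osd tree
    # List PGs stuck inactive (these are the ones blocking I/O)
    ceph pg dump_stuck inactive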
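
Also, please hold off on running xfs_repair -L on the remaining disks: -L zeroes the XFS log, which throws away the most recent metadata updates and is very likely what produced the leveldb checksum mismatch. A safer order of operations is sketched below; /dev/sdX1, the /backup paths, /mnt/recovery, and the OSD number N are placeholders for your actual devices and layout:

    # 1. Take a raw image first, so any repair attempt is reversible
    #    (GNU ddrescue copes with read errors better than plain dd)
    ddrescue /dev/sdX1 /backup/sdX1.img /backup/sdX1.map

    # 2. Dry run: report what xfs_repair would change, without writing
    xfs_repair -n /dev/sdX1

    # 3. Try a read-only mount that skips log replay, to copy data off
    mount -o ro,norecovery /dev/sdX1 /mnt/recovery

    # 4. Back up the OSD's leveldb (omap) directory before repairing
    cp -a /mnt/recovery/current/omap /backup/osd-N-omap
    umount /mnt/recovery

    # 5. Only as a last resort, zero the log and repair in place
    xfs_repair -L /dev/sdX1

The point of steps 1-4 is that xfs_repair -L is destructive; with a raw image and a copy of the omap directory you keep the option of retrying if the repair makes things worse.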