recover ceph journal disk

Hello,

We have a test project where we are using ceph+openstack.

Today we ran into some problems with this setup and had to force-reboot the
server. After that, the partition holding the ceph journal could no longer be
mounted.

When we checked it, we got this:

btrfsck /dev/mapper/vg_ssd-ceph_ssd
Checking filesystem on /dev/mapper/vg_ssd-ceph_ssd
UUID: 7121568d-3f6b-46b2-afaa-b2e543f31ba4
checking extents
checking fs roots
root 5 inode 257 errors 80
Segmentation fault


Given that we are running ceph on btrfs, could we simply reformat the journal
partition and continue working, or would that kill the entire node? We don't
care much about the data from the last few minutes before the crash.
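For reference, the recovery we are considering would look roughly like this. This is only a sketch: the OSD id (osd.0), the mount point, and the sysvinit-style service commands are assumptions from our setup and would need adjusting.

```shell
# Assumption: osd.0 is the OSD whose journal lives on this SSD volume.
# Prevent CRUSH from rebalancing while the OSD is down.
ceph osd set noout
service ceph stop osd.0

# Reformat the damaged journal filesystem and remount it.
mkfs.btrfs -f /dev/mapper/vg_ssd-ceph_ssd
mount /dev/mapper/vg_ssd-ceph_ssd /var/lib/ceph/osd/ceph-0   # hypothetical mount point

# Recreate an empty journal for the OSD, then bring it back online.
ceph-osd -i 0 --mkjournal
service ceph start osd.0
ceph osd unset noout
```

Any writes that were in the journal but not yet applied to the data store would be lost with this approach, which is why we only want to do it if it is safe for the rest of the node.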

Best regards,
Cristian Falcas

