Hello,

We have a test project where we are running Ceph with OpenStack. Today we had some problems with this setup and had to force-reboot the server. After that, the partition where we keep the Ceph journal could no longer be mounted. When we checked it, we got this:

btrfsck /dev/mapper/vg_ssd-ceph_ssd
Checking filesystem on /dev/mapper/vg_ssd-ceph_ssd
UUID: 7121568d-3f6b-46b2-afaa-b2e543f31ba4
checking extents
checking fs roots
root 5 inode 257 errors 80
Segmentation fault

Given that our OSDs run on btrfs, could we simply reformat the journal partition and continue our work, or would that kill the entire node? We don't care much about the data from the last few minutes before the crash.

Best regards,
Cristian Falcas
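
P.S. To make the question concrete, this is roughly the procedure we have in mind, sketched for a single OSD. The OSD id (0) and the journal mount point are placeholders for our setup, not values we have verified:

service ceph stop osd.0                                # stop the affected OSD first
mkfs.btrfs -f /dev/mapper/vg_ssd-ceph_ssd              # reformat the damaged journal volume
mount /dev/mapper/vg_ssd-ceph_ssd /srv/ceph-journal    # remount at our journal path (placeholder)
ceph-osd -i 0 --mkjournal                              # write a fresh, empty journal for this OSD
service ceph start osd.0                               # restart; the OSD should rejoin the cluster

Our understanding (please correct us if wrong) is that any journal entries that were not flushed to the data store are lost, but with replication the OSD should be able to recover the missing writes from its peers.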