recover ceph journal disk

On Monday, July 21, 2014, Cristian Falcas <cristi.falcas at gmail.com> wrote:

> Hello,
>
> We have a test project where we are using ceph+openstack.
>
> Today we had some problems with this setup and had to force-reboot the
> server. After that, the partition where we keep the ceph journal could not
> be mounted.
>
> When we checked it, we got this:
>
> btrfsck /dev/mapper/vg_ssd-ceph_ssd
> Checking filesystem on /dev/mapper/vg_ssd-ceph_ssd
> UUID: 7121568d-3f6b-46b2-afaa-b2e543f31ba4
> checking extents
> checking fs roots
> root 5 inode 257 errors 80
> Segmentation fault
>
>
> Considering that we are running ceph on btrfs, could we format the journal
> and continue our work? Or will this kill our entire node? We don't care
> very much about losing the data from the last few minutes before the crash.
>
> Best regards,
> Cristian Falcas
>

Usually losing the journal is very unsafe, but with btrfs it should be fine
(the OSD takes periodic snapshots of its data and will roll back to the
latest one to get a consistent view). You can find help on reformatting
journals in the docs or the ceph-osd help text. :)
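
For example, a minimal sketch of what that usually looks like (assuming OSD
id 0, the journal volume from the btrfsck output above, and a hypothetical
mount point; check the "osd journal" setting in your ceph.conf for the real
path):

  # Hypothetical layout: OSD 0, journal volume /dev/mapper/vg_ssd-ceph_ssd,
  # mounted at /var/lib/ceph/osd/ceph-0/journal-ssd (adjust to your setup).
  service ceph stop osd.0                      # stop the OSD first
  mkfs.btrfs -f /dev/mapper/vg_ssd-ceph_ssd    # reformat the damaged journal volume
  mount /dev/mapper/vg_ssd-ceph_ssd /var/lib/ceph/osd/ceph-0/journal-ssd
  ceph-osd -i 0 --mkjournal                    # write a fresh, empty journal
  service ceph start osd.0                     # OSD rolls back to its last snapshot

When the OSD comes back up it rolls back to its last consistent snapshot and
then lets peering and recovery pull in whatever writes it missed from the
other replicas.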
-Greg

-- 
Software Engineer #42 @ http://inktank.com | http://ceph.com

