Can I use btrfs-restore to restore ceph osds?

Hi,

I'm currently trying to re-enable my experimental ceph cluster that has been 
offline for a few months. Unfortunately, it appears that, out of the six btrfs 
volumes involved, only one can still be mounted, the other five are broken 
somehow. (If I ever use Ceph in production, it's probably not going to be on 
btrfs after this... I cannot recall whether or not the servers were properly 
shut down the last time, but even if not, this is a bit ridiculous.)

I cannot seem to repair the broken filesystems with btrfsck, but I can extract 
data from them with btrfs-restore.

If I want to restore this whole thing, can I just run btrfs-restore to recover 
the files that were on each broken volume, then make a new filesystem on the 
old partition, and finally copy back the directories and files restored by 
btrfs-restore? Or will that lose important information?
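Concretely, this is the sequence I have in mind for each broken volume (the device name and mount paths below are just placeholders for illustration):

```shell
# Sketch of the plan; /dev/sdb1 and the paths are placeholders, not my real layout.
btrfs restore -v /dev/sdb1 /mnt/restore-osd0          # 1. copy out whatever is still readable
mkfs.btrfs -f /dev/sdb1                               # 2. recreate the filesystem on the old partition
mount /dev/sdb1 /var/lib/ceph/osd/ceph-0              # 3. mount it back at the OSD's old location
cp -a /mnt/restore-osd0/. /var/lib/ceph/osd/ceph-0/   # 4. copy the restored data back in
```

My assumption is that cp -a preserves ownership, timestamps, and extended attributes; whether btrfs-restore itself recovers everything the OSD needs (xattrs in particular) is part of what I'm unsure about.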

	Guido
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

