On Wed, May 9, 2012 at 10:15 AM, Guido Winkelmann <guido-ceph@xxxxxxxxxxxxxxxxx> wrote:
> I'm currently trying to re-enable my experimental ceph cluster that has been
> offline for a few months. Unfortunately, it appears that, out of the six btrfs
> volumes involved, only one can still be mounted; the other five are broken
> somehow. (If I ever use Ceph in production, it's probably not going to be on
> btrfs after this... I cannot recall whether or not the servers were properly
> shut down the last time, but even if not, this is a bit ridiculous.)
>
> I cannot seem to repair the broken filesystems with btrfsck, but I can extract
> data from them with btrfs-restore.

The OSD uses btrfs snapshots internally, so any restore operation would have to
bring those snapshots back exactly as they were, too. btrfs-restore does seem to
have a -s option for that, but whether things will work out is hard to predict.
Since it was a test cluster, you're probably better off scrapping the data and
setting up a new cluster.
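If you do want to give it a try anyway, a rough sketch of what that might look
like (untested, and the device and target paths below are just placeholders):

    # pull the files plus the internal snapshots off the unmountable device
    btrfs-restore -s -v /dev/sdX1 /mnt/osd-recovery

    # the osd's snapshots should show up next to the current/ directory,
    # if I remember the layout right
    ls /mnt/osd-recovery

Note that, as far as I can tell, restore writes the snapshot contents out as
ordinary directories on the target rather than as real btrfs snapshots, which
is part of why I wouldn't expect the OSD to come back cleanly from this.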