Re: Unmountable btrfs filesystems

On 6/16/12 1:46 PM, Wido den Hollander wrote:
Hi,

On my dev cluster (10 nodes, 40 OSDs) I'm still trying to run Ceph on
btrfs, but over the last couple of months I've lost multiple OSDs due
to btrfs.

On my nodes I've set kernel.panic=60 so that whenever a kernel panic
occurs I get the node back within two minutes.
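For reference, that's just the standard kernel.panic sysctl (reboot N seconds
after a panic); a quick sketch of setting it, assuming the usual sysctl
mechanism:

```shell
# Reboot automatically 60 seconds after a kernel panic (runtime only,
# lost on reboot)
sysctl -w kernel.panic=60

# Persist the setting across reboots via /etc/sysctl.conf
echo "kernel.panic = 60" >> /etc/sysctl.conf
```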

Now, over the last while I've seen multiple nodes reboot (I didn't
catch the stack trace), but afterwards the btrfs filesystems on those
nodes were unmountable.

"btrfs: open_ctree failed"

I tried various kernels, most recently 3.3.0 from kernel.ubuntu.com,
but I'm still seeing this.

Is anyone seeing the same or did everybody migrate away to ext4 or XFS?

I still prefer btrfs for its snapshotting, but losing all these
OSDs all the time is getting kind of frustrating.

Any thoughts or comments?

Wido
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html

Hi Wido,

btrfsck might tell you what's wrong. There's also a btrfs-restore command in the dangerdonteveruse branch you could try. Beyond that, I guess it really comes down to tradeoffs.
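In case it's useful, a rough sketch of what trying those might look like (the
device name is a placeholder, and the restore tool's exact name and invocation
may differ in that branch):

```shell
# Read-only consistency check; /dev/sdb1 is a placeholder device name
btrfsck /dev/sdb1

# If the filesystem still won't mount, try salvaging files off the
# device into a scratch directory on a healthy filesystem
mkdir -p /mnt/recovered
btrfs restore /dev/sdb1 /mnt/recovered
```

Neither command writes to the source device by default, so they should be safe to try before anything more drastic.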

Good luck! ;)

Mark

