Re: Ceph fs stability

On 08/02/2012 10:10 AM, Niko! wrote:
Hi!

We are using Ceph 0.48 on three nodes to provide rbd images for a further
four KVM nodes (not kernel mapped) with no big issues, and we would like to
mount the Ceph fs on the KVM nodes just to store the XML virtual machine
definitions, so that they are immediately available if a host crashes (we
are running a multi-MDS configuration). As the Ceph fs is not production
ready, what are the possible problems? Could the fs corrupt the rbd pool,
or would any damage be limited to the data/metadata pools (acceptable for
us)? Could the fs hang the entire cluster or the KVM client nodes?

Regards.

     Niko

The fs won't corrupt other pools or make them inaccessible, but if you
use the kernel client it could potentially lock up the node. If you
really want the fs interface, you can further limit the impact of a
failure by mounting with fuse instead of the kernel ceph module.
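
For example, a rough sketch of the two mount styles (the monitor address
192.0.2.1 and the mount point are placeholders, and both assume a client
keyring/secret is already in place):

  # kernel client: a hung MDS session can wedge the whole node
  mount -t ceph 192.0.2.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

  # ceph-fuse instead: a hang stays inside the userspace process,
  # which can be killed and the fs remounted
  ceph-fuse -m 192.0.2.1:6789 /mnt/cephfs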

Since you're just storing some VM definitions, why not use rados
objects directly, e.g. with rados get/put to store and retrieve them?
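
A rough sketch of that approach (the pool name "vmconfig" and the
object/file names are just examples):

  # one-time: create a small pool for the definitions
  rados mkpool vmconfig

  # save a definition from the host that currently runs the guest
  rados -p vmconfig put guest01.xml /etc/libvirt/qemu/guest01.xml

  # after a host failure, pull it down on another node and define the guest
  rados -p vmconfig get guest01.xml /tmp/guest01.xml
  virsh define /tmp/guest01.xml

  # see what's stored
  rados -p vmconfig ls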

Josh
