ZFS on Ceph (rbd-fuse)


 



Hello all.

I have a Ceph cluster using XFS on the OSDs. Btrfs is not available to
me at the moment (the cluster is running CentOS 6.4 with the stock kernel).

I intend to maintain a full replica of an active ZFS dataset on the
Ceph infrastructure by installing an OpenSolaris KVM guest and using
rbd-fuse to expose the RBD image to the guest, since the qemu-kvm
shipped with CentOS 6.4 doesn't support librbd.
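For reference, a minimal sketch of that setup might look like the following. The pool name, image name, size, and mountpoint are placeholders, not taken from my actual configuration:

```shell
# Create an RBD image in a pool (names and size are examples)
rbd create --pool rbd --size 102400 zfs-replica

# Mount the pool's images as files via the rbd-fuse FUSE client
mkdir -p /mnt/rbd
rbd-fuse /mnt/rbd -p rbd

# Hand the image to the guest as a plain raw file, since this
# qemu-kvm build lacks librbd support
qemu-kvm -drive file=/mnt/rbd/zfs-replica,format=raw,if=virtio ...
```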

My question is: do I still need to keep the rbd pool's replica size
at 2, or is it enough that the ZFS filesystem contained in the zfs
send stream has its "copies" property set to 2? Together with ZFS's
internal checksumming, that appears to be more protective of my data.

Since ZFS is not in control of the underlying volume, I think RBD
should control the replication; but without copies=2, self-healing
can't happen at the ZFS level. Is Ceph's scrubbing enough to forgo
checksumming and duplicate copies in ZFS?
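Concretely, the two knobs I'm weighing are set like this (pool and dataset names are examples, not my real ones):

```shell
# Option A: replicate at the Ceph layer (2 copies of every object)
ceph osd pool set rbd size 2
ceph osd pool get rbd size

# Option B: replicate at the ZFS layer instead (2 copies of every block
# within the zpool, plus ZFS checksums for self-healing on scrub)
zfs set copies=2 tank/replica
zfs get copies tank/replica
zpool scrub tank
```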

Having both means four times the storage requirement. :(

Charles
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



