The trouble with using ZFS copies on top of RBD is that both copies of
any particular block might end up on the same OSD. If you have disabled
replication in Ceph, that would mean a single OSD failure could cause
data loss. For that reason, it seems better to do the replication in
Ceph rather than in ZFS in this case.

John

On Fri, Nov 29, 2013 at 11:13 AM, Charles 'Boyo <charlesboyo@xxxxxxxxx> wrote:
> Hello all.
>
> I have a Ceph cluster using XFS on the OSDs. Btrfs is not available to
> me at the moment (the cluster is running CentOS 6.4 with the stock
> kernel).
>
> I intend to maintain a full replica of an active ZFS dataset on the
> Ceph infrastructure by installing an OpenSolaris KVM guest and using
> rbd-fuse to expose the rbd image to the guest. That's because qemu-kvm
> in CentOS 6.4 doesn't support librbd.
>
> My question is: do I still need to keep the rbd pool's replica size
> at 2, or is it enough that the ZFS filesystem contained in the zfs send
> has its "number of copies" set to 2, which together with ZFS's internal
> checksumming appears to be more protective of my data?
>
> Since ZFS is not in control of the underlying volume, I think RBD
> should control the replica count, but without copies, self-healing
> can't happen at the ZFS level. Is Ceph scrubbing enough to forgo
> checksumming and duplicate copies in ZFS?
>
> Having both means four times the storage requirement. :(
>
> Charles
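
To illustrate John's suggestion of letting Ceph handle the redundancy, a
minimal sketch of the relevant settings, assuming the image lives in the
default "rbd" pool and the dataset is called "tank/data" (both names are
placeholders, not from the thread):

    # Keep two copies of every object at the Ceph layer
    ceph osd pool set rbd size 2
    ceph osd pool get rbd size

    # Leave ZFS at a single copy so the data is not stored four times;
    # ZFS checksumming stays on regardless of the copies setting
    zfs set copies=1 tank/data
    zfs get copies,checksum tank/data

With size=2 in Ceph, CRUSH places the two replicas of each object on
different OSDs (and, with the usual default rules, on different hosts),
which avoids the single-OSD failure mode John describes.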
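
For the rbd-fuse arrangement described in the quoted message, the image
could be exposed to the guest roughly as follows; the image name
"zfs-backup", its size and the mount point are assumptions for
illustration only:

    # Create a ~100 GB image in the rbd pool (size is given in MB here)
    rbd create zfs-backup --size 102400

    # Expose the images in the pool as plain files under a mount point
    mkdir -p /mnt/rbd-images
    rbd-fuse -p rbd /mnt/rbd-images

    # The guest can then be handed /mnt/rbd-images/zfs-backup as a raw
    # virtio disk, e.g. with qemu-kvm's
    #   -drive file=/mnt/rbd-images/zfs-backup,if=virtio,format=raw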
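
On the scrubbing question, both layers can verify data independently;
whether Ceph scrubbing alone is enough depends on whether end-to-end
checksums above the block layer are wanted. The commands below are the
standard ones; the OSD id, PG id and pool name are only examples:

    # Ceph: a deep scrub reads object data and compares it across replicas
    ceph osd deep-scrub 0          # deep-scrub every PG on osd.0
    ceph pg deep-scrub 2.1f        # or a single placement group

    # ZFS: a scrub verifies its own checksums, but with copies=1 on a
    # single rbd-backed vdev it can detect corruption, not repair it
    zpool scrub tank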