Re: ZFS on Ceph (rbd-fuse)

Thanks for the input, John.

So I should leave ZFS checksumming on, set the ZFS copies property
back to 1, and rely on Ceph's RBD replication instead.
Is it even sane to use rbd-fuse for this?
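
For the record, here is roughly what I have in mind, assuming the
default 'rbd' pool and a made-up dataset name:

  # ZFS side: keep checksumming, drop the extra copies
  zfs set checksum=on tank/backup   # 'tank/backup' is just a placeholder
  zfs set copies=1 tank/backup

  # Ceph side: let the pool provide the redundancy
  ceph osd pool set rbd size 2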

On a related note, is there any discard/trim support in rbd-fuse?
Otherwise I will never be able to thin out the RBD image once it has
been allocated, which will happen quickly with repeated zfs receives.
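
In the meantime I suppose I can at least watch how much of the image
is actually allocated with something like this (image name made up):

  # sum the allocated extents reported by 'rbd diff'
  rbd diff rbd/zfs-backup | awk '{ sum += $2 } END { print sum/1024/1024 " MB" }'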

Charles

On Fri, Nov 29, 2013 at 12:50 PM, John Spray <john.spray@xxxxxxxxxxx> wrote:
> The trouble with using ZFS copies on top of RBD is that both copies of
> any particular block might end up on the same OSD.  If you have
> disabled replication in Ceph, then this would mean a single OSD
> failure could cause data loss.  For that reason, it seems better to
> do the replication in Ceph rather than in ZFS in this case.
>
> John
>
> On Fri, Nov 29, 2013 at 11:13 AM, Charles 'Boyo <charlesboyo@xxxxxxxxx> wrote:
>> Hello all.
>>
>> I have a Ceph cluster using XFS on the OSDs. Btrfs is not available to
>> me at the moment (cluster is running CentOS 6.4 with stock kernel).
>>
>> I intend to maintain a full replica of an active ZFS dataset on the
>> Ceph cluster by installing an OpenSolaris KVM guest and using
>> rbd-fuse to expose the rbd image to the guest, since qemu-kvm in
>> CentOS 6.4 doesn't support librbd.
>>
>> My question is: do I still need to keep the rbd pool's replica size
>> at 2, or is it enough that the ZFS filesystem carried in the zfs send
>> stream has its "copies" property set to 2, which together with ZFS's
>> internal checksumming appears to be more protective of my data?
>>
>> Since ZFS is not in control of the underlying volume, I think RBD
>> should handle the replication; but without copies=2, self-healing
>> can't happen at the ZFS level. Is Ceph scrubbing enough to forgo
>> checksumming and extra copies in ZFS?
>>
>> Having both would mean four times the storage requirement. :(
>>
>> Charles
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



