Re: [Manila] Ceph native driver for manila

I like #4, but I'm not clear about the impact of the different
security models. Using "mapped" covers file sharing inside the cloud,
but seems to leave a gap for accessing the same share externally,
since ownership and permissions are stored in xattrs. Though perhaps
the standard kernel client could be made to recognise these?
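
For reference, a virtfs attachment in libvirt looks roughly like this
(the paths and mount tag below are made up):

    <filesystem type='mount' accessmode='mapped'>
      <source dir='/mnt/cephfs/shares/tenant1'/>
      <target dir='share1'/>
    </filesystem>

and in the guest:

    # mount -t 9p -o trans=virtio,version=9p2000.L share1 /mnt/share

AIUI, with accessmode='mapped' QEMU stashes the guest's ownership and
mode bits in user.virtfs.* xattrs on the host rather than in the real
inode metadata, which is exactly what an external client would trip
over.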
#3 would be great, along with the necessary isolation bits in CephFS.
#2 seems acceptable assuming one can scale out the Ganesha
CephFS-NFS servers, which avoids the problems of #1, which sucks. :-)
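
For #2, a single Ganesha head re-exporting CephFS would be configured
with something like the following (export id and paths are
illustrative); scaling out would then mean running N such heads, each
owning a set of exports:

    EXPORT {
        Export_ID = 100;
        Path = "/volumes/tenant1/share1";
        Pseudo = "/share1";
        Access_Type = RW;
        FSAL {
            Name = CEPH;
        }
    }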

On 27 February 2015 at 16:19, Sage Weil <sweil@xxxxxxxxxx> wrote:
> On Fri, 27 Feb 2015, Haomai Wang wrote:
>> > Anyway, this leads to a few questions:
>> >
>> >  - Who is interested in using Manila to attach CephFS to guest VMs?
>>
>> Yeah, actually we are doing this
>> (https://www.openstack.org/vote-vancouver/Presentation/best-practice-of-ceph-in-public-cloud-cinder-manila-and-trove-all-with-one-ceph-storage).
>>
>> Hmm, the link seems to redirect and is useless :-(
>>
>> >  - What use cases are you interested?
>>
>> We use Manila + OpenStack for our NAS service.
>>
>> >  - How important is security in your environment?
>>
>> Very important; we need to provide QoS and network isolation (private
>> network support).
>>
>> Right now we use the default Manila driver: we attach an RBD image to a
>> service VM, and that service VM exports an NFS endpoint.
>>
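
Concretely, that default-driver path is roughly the following on the
service VM (the pool, image, device and client network below are
illustrative):

    # rbd map share_pool/share1            # attach the RBD image
    # mkfs.xfs /dev/rbd0                   # first use only
    # mount /dev/rbd0 /exports/share1
    # echo '/exports/share1 10.0.0.0/24(rw)' >> /etc/exports
    # exportfs -ra                         # publish the NFS export
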
>> Next, as we showed in the presentation, we will use the QEMU driver to
>> pass filesystem commands through directly instead of block commands, so
>> the host can access CephFS directly and safely, and network isolation
>> can be ensured. That gives a clean separation between the internal
>> (storage) network and the virtual network.
>
> Is this using the qemu virtfs/9p server and 9p in the guest?  With a
> cephfs kernel mount on the host?  How reliable have you found it to be?
>
> That brings us to 4 options:
>
> 1) default driver: map rbd to manila VM, export NFS
> 2) ganesha driver: reexport cephfs as NFS
> 3) native ceph driver: let guest mount cephfs directly
> 4) mount cephfs on host, guest access via virtfs
>
> I think in all but #3 you get decent security isolation between tenants as
> long as you trust KVM and/or ganesha to enforce permissions.  In #3 we
> need to enforce that in CephFS (and have some work to do).
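
The enforcement mentioned here would presumably look like per-tenant
cephx caps pinning each client to its own subtree; a sketch of the
sort of cap syntax involved (names and pools made up, and enforcement
of the path= restriction is exactly the work-in-progress bit):

    # ceph auth get-or-create client.tenant1 \
          mon 'allow r' \
          mds 'allow rw path=/volumes/tenant1' \
          osd 'allow rw pool=cephfs_data'

    # mount -t ceph mon1:6789:/volumes/tenant1 /mnt/share1 \
          -o name=tenant1,secretfile=/etc/ceph/tenant1.secret

Until the MDS actually enforces path=, a tenant key can mount and
traverse the whole tree.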
>
> I like #3 because it promises the best performance and shines a light on
> the multitenancy gaps we have now, and I have a feeling that
> multitenant security isn't a huge issue for a lot of users, but.. that's
> why I'm asking!
>
> sage



-- 
Cheers,
~Blairo