Re: [Manila] Ceph native driver for manila

On Fri, Feb 27, 2015 at 1:19 PM, Sage Weil <sweil@xxxxxxxxxx> wrote:
> On Fri, 27 Feb 2015, Haomai Wang wrote:
>> > Anyway, this leads to a few questions:
>> >
>> >  - Who is interested in using Manila to attach CephFS to guest VMs?
>>
>> Yeah, actually we are doing this
>> (https://www.openstack.org/vote-vancouver/Presentation/best-practice-of-ceph-in-public-cloud-cinder-manila-and-trove-all-with-one-ceph-storage).
>>
>> Hmm, the link seems to redirect and isn't useful :-(
>>
>> >  - What use cases are you interested?
>>
>> We use Manila + OpenStack for our NAS service.
>>
>> >  - How important is security in your environment?
>>
>> Very important; we need to provide QoS and network isolation (private
>> network support).
>>
>> Currently we use the default Manila driver: we attach an RBD image to a
>> service VM, and that service VM exports an NFS endpoint.
>>
>> Next, as we showed in the presentation, we will use the QEMU driver to
>> pass filesystem commands through directly instead of block commands, so
>> the host can access CephFS directly and safely, and network isolation
>> can be ensured. This keeps a clear separation between the internal
>> (storage) network and the virtual network.
>
> Is this using the qemu virtfs/9p server and 9p in the guest?  With a
> cephfs kernel mount on the host?  How reliable have you found it to be?
>
> That brings us to 4 options:
>
> 1) default driver: map rbd to manila VM, export NFS
> 2) ganesha driver: reexport cephfs as NFS
> 3) native ceph driver: let guest mount cephfs directly
> 4) mount cephfs on host, guest access via virtfs
>
> I think in all but #3 you get decent security isolation between tenants as
> long as you trust KVM and/or ganesha to enforce permissions.  In #3 we
> need to enforce that in CephFS (and have some work to do).
>
> I like #3 because it promises the best performance and shines the light on
> the multitenancy gaps we have now, and I have this feeling that
> multitenant security isn't a huge issue for a lot of users, but.. that's
> why I'm asking!
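
(Inline note on the enforcement point above: what #3 would need is roughly a
per-tenant cephx key restricted to one subtree. A minimal sketch follows; the
"path=" MDS cap and the /volumes/<tenant> layout are assumptions about what
such enforcement could look like, not something CephFS gives us today:

    # Sketch: create a cephx identity for one tenant, restricted to its own
    # CephFS subtree and data pool.  The "path=" MDS cap is hypothetical,
    # and the pool name is a placeholder.
    import subprocess

    def create_tenant_key(tenant):
        subtree = "/volumes/%s" % tenant
        cmd = [
            "ceph", "auth", "get-or-create",
            "client.manila-%s" % tenant,
            "mon", "allow r",
            "mds", "allow rw path=%s" % subtree,  # assumed future path cap
            "osd", "allow rw pool=cephfs_data",
        ]
        return subprocess.check_output(cmd).decode()

    print(create_tenant_key("tenant1"))

The key handed to a tenant would then only be useful for its own subtree,
which is exactly the gap the quoted paragraph refers to.)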

For us, the main problem with #3 is that the guest VM needs to access the
Ceph cluster via the *virtual network*. With private networks supported, in
theory a guest VM cannot reach the internal network, especially the storage
network. Maybe we can make the Manila service VM special and let it pass
traffic through via NET or other means, but that still has a network impact
which may hurt IO performance.
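
To make the network point concrete: with #3 the guest has to reach the Ceph
monitors and OSDs directly, so the tenant's virtual network must route to
the storage network. A tiny sketch of the reachability question from inside
a guest (the monitor addresses below are placeholders, not real ones):

    # Sketch: verify from inside a guest that the Ceph monitors are reachable.
    # With option #3 this has to succeed over the tenant's virtual network,
    # which is exactly the isolation problem described above.
    import socket

    MONITORS = [("192.168.100.11", 6789), ("192.168.100.12", 6789)]  # placeholders

    def mon_reachable(host, port, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host, port in MONITORS:
        print("%s:%d reachable: %s" % (host, port, mon_reachable(host, port)))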

Actually, #4 may get better performance, because it passes IO through the
QEMU queue ring, while #3 has to carry the guest VM's traffic from the
virtual network over to the internal network.
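
For reference, on the host side #4 would look roughly like the following:
mount CephFS with the kernel client, then expose one directory to the guest
as a virtfs/9p filesystem through libvirt. Only a sketch; the domain name,
paths, monitor address and mount tag are all made-up placeholders:

    # Sketch: host-side setup for option #4 (CephFS kernel mount + virtio-9p).
    # Domain name, paths, monitor address and mount tag are placeholders.
    import subprocess
    import libvirt  # python-libvirt bindings

    HOST_MOUNT = "/mnt/cephfs/shares/tenant1"   # hypothetical per-tenant dir
    DOMAIN = "tenant1-vm"                       # hypothetical guest name

    FS_XML = """
    <filesystem type='mount' accessmode='passthrough'>
      <source dir='%s'/>
      <target dir='tenant1_share'/>
    </filesystem>
    """ % HOST_MOUNT

    # 1. Kernel CephFS mount on the host.
    subprocess.check_call([
        "mount", "-t", "ceph", "192.168.100.11:6789:/", "/mnt/cephfs",
        "-o", "name=admin,secretfile=/etc/ceph/admin.secret",
    ])

    # 2. Add the directory to the guest's persistent config as a 9p device
    #    (takes effect the next time the guest starts).
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName(DOMAIN)
    dom.attachDeviceFlags(FS_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)

    # Inside the guest:  mount -t 9p -o trans=virtio tenant1_share /mnt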

Of course, maybe other users don't need to consider this case :-)

>
> sage



-- 
Best Regards,

Wheat