Re: CephFS & Project Manila (OpenStack)

>> This is going to get horribly ugly when you add neutron into the mix, so
>> much so I'd consider this option a non-starter. If someone is using
>> openvswitch to create network overlays to isolate each tenant I can't
>> imagine this ever working.
>
> I'm not following here.  Is this only needed if ceph shares the same
> subnet as the VM?  I don't know much about how these things work, but I
> would expect that it would be possible to route IP traffic from a guest
> network to the storage network (or anywhere else, for that matter)...
>
> That aside, however, I think it would be a mistake to take the
> availability of cephfs vs nfs clients as a reason alone for a particular
> architectural approach.  One of the whole points of ceph is that we ignore
> legacy when it doesn't make sense.  (Hence rbd, not iscsi; cephfs, not
> [p]nfs.)

In an overlay world, physical VLANs have no relation to virtual
networks. An overlay literally encapsulates layer 2 inside layer 3,
adds a VNI (virtual network identifier), and uses tunnels (VXLAN,
STT, GRE, etc.) to connect VMs on disparate hypervisors that may not
even have L2 connectivity to each other. One of the core tenets of
virtual networks is giving tenants the ability to use overlapping
RFC1918 addressing; in this case a tenant could already be using the
very addresses assigned to the NFS storage at the physical layer.

Even if we could pretend that would never happen (namespaces or
jails, maybe?), you would still need to provision a distinct NFS IP
per tenant and run a virtual switch on the filer head that supports
both the tunneling protocol used by the overlay and the southbound
API that overlay's virtual switch uses to insert/remove flow
information. The only alternative to embedding a myriad of different
virtual switch protocols on the filer head would be to use a
VTEP-capable switch for the encapsulation; I think only one or two
vendors ship those today, Arista's 7150 and something in the Cumulus
lineup.

Even if you could get past all that, I'm somewhat terrified by the
proposition of connecting the storage fabric to a tenant network,
although that is a much more acute concern for public clouds.
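
To make the encapsulation point concrete, here is a rough Python
sketch of what a VTEP does to a tenant frame, following the RFC 7348
header layout. The function names are mine and purely illustrative;
a real deployment would use the kernel vxlan driver or OVS rather
than hand-rolling this, so treat it as a sketch of the format, not a
working tunnel endpoint:

import socket
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP port for VXLAN

def vxlan_encapsulate(inner_l2_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header (RFC 7348) to an Ethernet frame.

    byte 0    : flags, 0x08 = "VNI present"
    bytes 1-3 : reserved
    bytes 4-6 : 24-bit VNI identifying the tenant network
    byte 7    : reserved
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    header = struct.pack("!B3xI", 0x08, vni << 8)
    return header + inner_l2_frame

def send_to_remote_vtep(frame: bytes, vni: int, vtep_ip: str) -> None:
    """Ship the encapsulated frame to the far-end VTEP as ordinary UDP/IP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(vxlan_encapsulate(frame, vni), (vtep_ip, VXLAN_PORT))
    finally:
        sock.close()

The point being that the outer packet is plain routed UDP/IP and the
only thing tying the frame to a tenant is the VNI, so nothing about
it lines up with the physical VLANs the filer lives on, and the filer
would have to speak this (or hand off to a VTEP switch) for every
tenant that wants NFS.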

Here's a good IETF draft on overlays if anyone is in dire need of bedtime reading:

http://tools.ietf.org/html/draft-mity-nvo3-use-case-04

-- 

Kyle



