Option 1) The service plugs your filesystem's IP into the VM's network
and provides direct IP access. For a shared box (like an NFS server)
this is fairly straightforward and works well (*everything* has a
working NFS client). It's more troublesome for CephFS, since we'd need
to expose quite a few hosts (the monitors, MDS, and OSDs), lots of
operating systems don't ship a decent CephFS client by default, and a
client that misbehaves or disappears can cause some service disruption
(most likely via lease timeouts). Still, it may not be impossible.
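For illustration, the in-guest mounts would look roughly like this
(hostnames, paths, and key files below are just placeholders):

    # NFS: any stock client works
    mount -t nfs nfs-server.example.com:/export/share /mnt/share

    # CephFS: the kernel client has to reach the mons, MDS, and OSDs,
    # and needs a cephx key for the tenant
    mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
        -o name=tenant1,secretfile=/etc/ceph/tenant1.secret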
This is going to get horribly ugly when you add Neutron into the mix,
so much so that I'd consider this option a non-starter. If someone is
using Open vSwitch to create network overlays to isolate each tenant, I
can't imagine this ever working.
Option 2) The hypervisor mediates access to the FS via some
pass-through filesystem (presumably 9p, the Plan 9 filesystem
protocol, which QEMU/KVM is
already prepared to work with). This works better for us; the
hypervisor host can have a single CephFS mount that it shares
selectively to client VMs or something.
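For illustration, a virtio-9p share would look roughly like this
(paths and mount tags are placeholders; libvirt's <filesystem>
element expresses the same thing):

    # Hypervisor: expose a subtree of its CephFS mount to one guest
    qemu-system-x86_64 ... \
        -virtfs local,path=/mnt/cephfs/tenant1,mount_tag=share0,security_model=mapped-xattr,id=fs0

    # Guest: mount the tag over virtio
    mount -t 9p -o trans=virtio,version=9p2000.L share0 /mnt/share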
This seems like the only sane way to do it IMO.
Option 3) An agent communicates with the client via a well-understood
protocol (probably NFS) on their VLAN, and to the backing
filesystem on a different VLAN in the native protocol. This would also
work for CephFS, but of course having to use a gateway agent (either
on a per-tenant or per-many-tenants basis) is a bit of a bummer in
terms of latency, etc.
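For illustration, a minimal gateway would mount CephFS on the storage
VLAN and re-export it over NFS on the tenant VLAN (addresses, names,
and paths are placeholders; nfs-ganesha's Ceph FSAL would be another
way to handle the export side):

    # Gateway, storage VLAN: mount CephFS natively
    mount -t ceph mon1.example.com:6789:/tenant1 /srv/tenant1 \
        -o name=gateway,secretfile=/etc/ceph/gateway.secret

    # Gateway, tenant VLAN: /etc/exports entry for knfsd
    # (fsid= is required since CephFS has no device UUID to key on)
    /srv/tenant1  10.0.1.0/24(rw,sync,no_subtree_check,fsid=101)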
Again, this is still tricky with Neutron and network overlays. You
would need one agent per tenant network and would have to encapsulate
the agent's traffic with Open vSwitch (STT/VXLAN/etc.) or a physical
switch (only VXLAN is supported in silicon).
Kyle