On Fri, Jul 07, 2017 at 02:13:38PM +1000, NeilBrown wrote:
> On Fri, Jul 07 2017, NeilBrown wrote:
>
> > On Fri, Jun 30 2017, Chuck Lever wrote:
> >>
> >> Wouldn't it be nicer if it worked like this:
> >>
> >> (guest)$ cat /etc/hosts
> >> 129.0.0.2 localhyper
> >> (guest)$ mount.nfs localhyper:/export /mnt
> >>
> >> And the result was a working NFS mount of the
> >> local hypervisor, using whatever NFS version the
> >> two both support, with no changes needed to the
> >> NFS implementation or the understanding of the
> >> system administrator?
> >
> > Yes.  Yes.  Definitely Yes.
> > Though I suspect you mean "127.0.0.2", not "129..."??
> >
> > There must be some way to redirect TCP connections to some address
> > transparently through to the vsock protocol.  The "sshuttle" program
> > does this to transparently forward TCP connections over an ssh
> > connection.  Using a similar technique to forward connections over
> > vsock shouldn't be hard.
> >
> > Or is performance really critical, and you get too much copying when
> > you try forwarding connections?  I suspect that is fixable, but it
> > would be a little less straightforward.
> >
> > I would really *not* like to see vsock support being bolted into one
> > network tool after another.
>
> I've been digging into this a bit more.  I came across
> https://vmsplice.net/~stefan/stefanha-kvm-forum-2015.pdf
>
> which (on page 7) lists some reasons not to use TCP/IP between guest
> and host.
>
>  . Adding & configuring guest interfaces is invasive
>
> That is possibly true.  But adding support for a new address family
> to NFS, NFSD, and nfs-utils is also very invasive.  You would need to
> install this software on the guest.  I suggest you install different
> software on the guest which solves the problem better.

Two different types of "invasive":

1. Requiring guest configuration changes that are likely to cause
   conflicts.

2. Requiring changes to the software stack.  Once installed there are
   no conflicts.

I'm interested in and open to a different solution, but it must avoid
invasive configuration changes, especially inside the guest.

>  . Prone to break due to config changes inside guest
>
> This is, I suspect, a key issue.  With vsock, the address of the
> guest-side interface is defined by options passed to qemu.  With
> normal IP addressing, the guest has to configure the address.
>
> However I think that IPv6 autoconfig makes this work well without
> vsock.  If I create a bridge interface on the host, run
>    ip -6 addr add fe80::1 dev br0
> then run a guest with
>    -net nic,macaddr=Ch:oo:se:an:ad:dr \
>    -net bridge,br=br0 \
>
> then the client can
>    mount [fe80::1%interfacename]:/path /mountpoint
>
> and the host will see a connection from
>    fe80::ch:oo:se:an:ad:dr
>
> So from the guest side, I have achieved zero-config NFS mounts from
> the host.

It is not zero-configuration, since [fe80::1%interfacename] contains a
variable, "interfacename", whose value is unknown ahead of time.  This
makes documentation, as well as sharing configuration between VMs,
more difficult.  In other words, we're back to something that requires
per-guest configuration and doesn't just work everywhere.

> I don't think the server can filter connections based on which
> interface a link-local address came from.  If that was a problem that
> someone wanted to be fixed, I'm sure we can fix it.
>
> If you need to be sure that clients don't fake their IPv6 address,
> I'm sure netfilter is up to the task.

Yes, it's common to prevent spoofing on the host using netfilter, and
I think it wouldn't be a problem.
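For example, something like this minimal sketch could pin a guest's
bridge port to its expected link-local address (illustrative only: the
tap device name, MAC address, and derived EUI-64 address are made up,
it assumes br_netfilter is enabled so bridged traffic traverses
ip6tables, and a real setup would need further rules, e.g. to allow
the unspecified source used for duplicate address detection):

   # Guest tap device vnet0 with fixed MAC 52:54:00:12:34:56, which
   # autoconfigures the EUI-64 link-local address fe80::5054:ff:fe12:3456.
   modprobe br_netfilter
   sysctl -w net.bridge.bridge-nf-call-ip6tables=1

   # Drop traffic entering via that bridge port from any other source:
   ip6tables -A INPUT -m physdev --physdev-in vnet0 \
       ! -s fe80::5054:ff:fe12:3456/128 -j DROP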
>  . Creates network interfaces on host that must be managed
>
> What vsock does is effectively create a hidden interface on the host
> that only the kernel knows about and so the sysadmin cannot break it.
> The only difference between this and an explicit interface on the
> host is that the latter requires a competent sysadmin.
>
> If you have other reasons for preferring the use of vsock for NFS,
> I'd be happy to hear them.  So far I'm not convinced.

Before working on AF_VSOCK I originally proposed adding dedicated
network interfaces to guests, similar to what you've suggested, but
there was resistance for additional reasons that weren't covered in
the presentation:

Using AF_INET exposes the host's network stack to guests, and through
accidental misconfiguration even external traffic could reach the
host's network stack.  AF_VSOCK doesn't do routing or forwarding, so
we can be sure that any activity is intentional.

Some virtualization use cases run guests without any network
interfaces as a matter of security policy.  One could argue that
AF_VSOCK is just another network channel, but due to its restricted
usage, the attack surface is much smaller than an AF_INET network
interface.
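To make that concrete, a guest can be given a vsock device with no NIC
at all (a sketch; the guest CID of 3 is arbitrary, and the host needs
the vhost_vsock module loaded):

   modprobe vhost_vsock
   qemu-system-x86_64 ... -net none \
       -device vhost-vsock-pci,guest-cid=3

The guest then reaches the host at CID 2 (VMADDR_CID_HOST) without any
L2/L3 networking existing on either side.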