This version of AF_LOCAL just looks like VSOCK and vhost-vsock by another
name. E.g., it apparently hard-wires VSOCK's host-guest communication
restriction even more strongly. What are its intrinsic advantages?

Matt

On Fri, Sep 22, 2017 at 7:32 AM, Jeff Layton <jlayton@xxxxxxxxxx> wrote:
> On Fri, 2017-09-22 at 10:55 +0100, Steven Whitehouse wrote:
>> Hi,
>>
>> On 21/09/17 18:00, Stefan Hajnoczi wrote:
>> > On Tue, Sep 19, 2017 at 01:24:52PM -0400, J. Bruce Fields wrote:
>> > > On Tue, Sep 19, 2017 at 05:44:27PM +0100, Daniel P. Berrange wrote:
>> > > > On Tue, Sep 19, 2017 at 11:48:10AM -0400, Chuck Lever wrote:
>> > > > > > On Sep 19, 2017, at 11:10 AM, Daniel P. Berrange <berrange@xxxxxxxxxx> wrote:
>> > > > > > VSOCK requires no guest configuration: it won't be broken
>> > > > > > accidentally by NetworkManager (or equivalent), and it won't be
>> > > > > > mistakenly blocked by a guest admin/OS adding a "deny all"
>> > > > > > default firewall policy. The same applies on the host side, and
>> > > > > > since there is separation from IP networking, there is no
>> > > > > > possibility of the guest ever getting a channel out to the LAN,
>> > > > > > even if the host is mis-configured.
>> > > > >
>> > > > > We don't seem to have configuration fragility problems with other
>> > > > > deployments that scale horizontally.
>> > > > >
>> > > > > IMO you should focus on making IP reliable rather than trying to
>> > > > > move familiar IP-based services to other network fabrics.
>> > > >
>> > > > I don't see that ever happening, except in a scenario where a
>> > > > single org is in tight control of the whole stack (host & guest),
>> > > > which is not the case for cloud in general - only some on-site
>> > > > clouds.
>> > >
>> > > Can you elaborate?
>> > >
>> > > I think we're having trouble understanding why you can't just say
>> > > "don't do that" to someone whose guest configuration is interfering
>> > > with the network interface they need for NFS.
>> >
>> > Dan can add more information on the OpenStack use case, but your
>> > question is equally relevant to the other use case I mentioned: easy
>> > file sharing between host and guest.
>> >
>> > Management tools like virt-manager (https://virt-manager.org/) should
>> > support a "share directory with VM" feature. The user chooses a
>> > directory on the host and a mount point inside the guest, then clicks
>> > OK. The directory should appear inside the guest.
>> >
>> > VMware, VirtualBox, etc. have had file sharing for a long time. It's
>> > a standard feature.
>> >
>> > Here is how to implement it using AF_VSOCK:
>> > 1. Check for the presence of a virtio-vsock device in the VM, or
>> >    hotplug it.
>> > 2. Export the directory from a host NFS server (nfs-ganesha, nfsd,
>> >    etc.).
>> > 3. Send a qemu-guest-agent command to (optionally) add an /etc/fstab
>> >    entry and then mount.
>> >
>> > The user does not need to take any action inside the guest.
>> > Non-technical users can share files without even knowing what NFS is.
>> >
>> > There are too many scenarios where guest administrator action is
>> > required with NFS over TCP/IP. We can't tell them "don't do that",
>> > because that makes this feature unreliable.
>> >
>> > Today we ask users to set up NFS or CIFS themselves. In many cases
>> > that is inconvenient, and an easy file sharing feature would be much
>> > better.
>> >
>> > Stefan
>> >
>>
>> I don't think we should give up on making NFS easy to use with TCP/IP
>> in such situations. With IPv6 we could have (for example) a device
>> with a well-known link-local address at the host end, and an
>> automatically allocated link-local address at the guest end. In other
>> words, the same as VSOCK, but with IPv6 rather than VSOCK addresses.
>> At that point the remainder of the NFS config steps would be identical
>> to those you've outlined with VSOCK above.
>>
>> Creating a (virtual) network device which is restricted to host/guest
>> communication and automatically configures itself should be a lot less
>> work than adding a whole new protocol to NFS, I think. It could also
>> be used for many other use cases, as well as giving the choice between
>> NFS and CIFS. So it is much more flexible, and should be quicker to
>> implement too.
>>
>
> FWIW, I'm also intrigued by Chuck's AF_LOCAL proposition. What about
> this idea:
>
> Make a filesystem (or a pair of filesystems) that could be mounted on
> host and guest. An application running on the host creates a unix
> socket in there, and it shows up on the guest's filesystem. The
> sockets use a virtio backend to shuffle data around.
>
> That seems like it could be very useful.
> --
> Jeff Layton <jlayton@xxxxxxxxxx>

--
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
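On the guest side, Stefan's AF_VSOCK flow in the thread above comes down
to addressing the host by a (CID, port) pair instead of an IP address. A
minimal sketch, assuming Linux with a virtio-vsock device and a Python
build that exposes socket.AF_VSOCK; reusing 2049 as the vsock port number
is an assumption, not something the thread specifies:

```python
# Sketch of a guest connecting to a host service over AF_VSOCK.
# Assumes Linux and socket.AF_VSOCK support; connect() will raise
# OSError if no virtio-vsock device is present in the guest.
import socket

VMADDR_CID_HOST = 2   # well-known CID of the hypervisor/host
NFS_PORT = 2049       # standard NFS port, reused here as a vsock port


def host_address(cid: int = VMADDR_CID_HOST, port: int = NFS_PORT):
    """Return the (cid, port) pair an AF_VSOCK connect() takes.

    Unlike AF_INET there is no hostname, netmask, or routing to set up:
    the CID is assigned by the hypervisor, which is why no guest-side
    network configuration is needed.
    """
    return (cid, port)


def connect_to_host(port: int = NFS_PORT) -> socket.socket:
    s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    s.connect(host_address(port=port))
    return s
```

This is only the addressing step; the actual NFS-over-VSOCK transport in
Stefan's patches lives in the kernel, not in userspace code like this.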
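Steven's IPv6 alternative would leave the guest-side step looking like an
ordinary NFS mount against the host's well-known link-local address. A
sketch, where fe80::2, eth0, and /export are all placeholder assumptions:

```shell
# Hypothetical guest-side mount over a well-known IPv6 link-local
# address. Link-local addresses need a zone id (%eth0) to select the
# interface, since fe80::/10 is ambiguous across interfaces.
mount -t nfs '[fe80::2%eth0]:/export' /mnt
```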
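The mechanism Jeff describes is ordinary AF_UNIX semantics: an application
binds a socket at a path, and a peer that can see that path connects to
it. A local-only sketch, with a temporary directory standing in for the
hypothetical shared filesystem (the virtio backend shuffling the data
between host and guest is exactly the part this cannot show):

```python
# Both "host" and "guest" ends run in one process here; the point is
# only that rendezvous happens through a filesystem path.
import os
import socket
import tempfile
import threading


def roundtrip() -> bytes:
    with tempfile.TemporaryDirectory() as shared:
        path = os.path.join(shared, "app.sock")

        # "Host" application: the socket appears as a filesystem entry.
        srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        srv.bind(path)
        srv.listen(1)

        def serve():
            conn, _ = srv.accept()
            conn.sendall(b"hello from the host side")
            conn.close()

        t = threading.Thread(target=serve)
        t.start()

        # "Guest" side: connect purely by path, as it would on the
        # shared mount, and read the greeting.
        c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        c.connect(path)
        data = c.recv(1024)
        c.close()
        t.join()
        srv.close()
        return data
```

In the proposed design the bind and the connect would happen on opposite
sides of the host/guest boundary, with the shared filesystem providing
the namespace and virtio providing the transport.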