> On Sep 20, 2017, at 9:16 AM, J. Bruce Fields <bfields@xxxxxxxxxxxx> wrote:
>
> On Tue, Sep 19, 2017 at 05:09:25PM -0400, Chuck Lever wrote:
>>
>>> On Sep 19, 2017, at 4:42 PM, J. Bruce Fields <bfields@xxxxxxxxxxxx> wrote:
>>>
>>> On Tue, Sep 19, 2017 at 03:56:50PM -0400, Chuck Lever wrote:
>>>> A proof of concept is nice, but it isn't sufficient for merging
>>>> NFS/VSOCK into upstream Linux. Unlike Ceph, NFS is an Internet
>>>> standard. We can't introduce changes as we please and expect
>>>> the rest of the world to follow us.
>>>>
>>>> I know the Ganesha folks chafe at this requirement, because
>>>> standardization progress can sometimes be measured in geological
>>>> time units.
>>>
>>> It doesn't need to be--I think we're only asking for a few pages here,
>>> and nothing especially complicated (at the protocol level).
>>
>> That would define RPC over VSOCK. I would like to see a problem
>> statement here, and we'd want to find a normative reference defining
>> VSOCK addressing. Does one exist?
>>
>> My sense is that NFS on VSOCK would need more. The proof of concept
>> I'm aware of drops a lot of functionality (for example, NFSv2/3 is
>> excluded, and so are RPCSEC_GSS and the NFSv4.0 backchannel) to make
>> NFS work on this transport. Interoperability would require that this
>> list be written down somewhere.
>
> I don't think they need to support NFSv2/3 or RPCSEC_GSS, but it could
> be worth a little text to explain why not, if they don't.

Right, I'm not taking a position on whether or not those things should
be supported. But the list of features that don't work on VSOCK mounts
is substantial and needs to be documented, IMO.

>> We also need to deal with filehandles and lock state during guest
>> live migration.
>
> That sounds like a separate issue independent of transport.

Yes, it is separate from transport specifics, but it's significant.

> I've been assuming there's still some use to them in an implementation
> that doesn't support migration at first.
File handles suddenly change and lock state vanishes after a live
migration event, both of which would be catastrophic for hypervisor
mount points. This might be mitigated with some NFS protocol changes,
but some implementation work is definitely required. That work hasn't
been scoped, as far as I am aware.

From Daniel's description of their use cases, the guests leave the
local hypervisor shares mounted long-term. You'd need some way to
signal a guest to kill applications that are active on those mount
points and unmount all hypervisor shares before a migration, then
remount after the guest has landed on the new host.

The NFSROOT use case would likely find this restriction onerous: you
can't unmount your root fs without an operational disruption like a
reboot.

Without guest migration support, NFS/VSOCK would be usable only for
transient mount points -- not for NFSROOT, nor for users in guests
accessing their home directories. Otherwise, the guests in question
would have to be nailed to a host: live guest migration disabled for
them so that long-term NFS/VSOCK mounts remain safe.

> If not it's a bigger project.
>
> --b.
>
>>
>> That feels like more than a few pages to me.
>>
>>
>>> That
>>> shouldn't take so long. (Not to be published as an RFC, necessarily,
>>> but to get far enough along that we can be pretty certain it won't need
>>> incompatible changes.)
>>
>>
>> --
>> Chuck Lever
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html

--
Chuck Lever
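
To make the addressing question raised in the thread concrete: on
Linux, an AF_VSOCK endpoint is named by a (context ID, port) pair
rather than an IP address and port, which is part of what an NFS/VSOCK
document would have to pin down (for instance, how such an endpoint is
rendered wherever the protocol carries an address). What follows is a
minimal sketch, not anything from the proof of concept under
discussion; it assumes only the Linux AF_VSOCK conventions
(VMADDR_CID_HOST = 2 names the host side, guests get CIDs >= 3) and
reuses NFS's registered port 2049.

```python
import socket

# Well-known VSOCK context IDs, per the Linux <linux/vm_sockets.h> ABI.
VMADDR_CID_HYPERVISOR = 0
VMADDR_CID_HOST = 2        # the host/hypervisor side of the pair
# Guests are assigned CIDs >= 3 by the hypervisor.

NFS_PORT = 2049            # IANA-registered NFS port, reused here

def vsock_nfs_address(cid=VMADDR_CID_HOST, port=NFS_PORT):
    """Return the (CID, port) tuple AF_VSOCK uses in place of (host, port)."""
    return (cid, port)

# On a kernel with vsock support, a guest-side client would open the
# transport like this. The guard and try/except are needed because
# AF_VSOCK exists only on Linux with the vsock modules loaded.
if hasattr(socket, "AF_VSOCK"):
    try:
        s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
        # s.connect(vsock_nfs_address())  # guest -> host NFS server
        s.close()
    except OSError:
        pass  # kernel built without vsock transport support

print(vsock_nfs_address())
```

Note that the (CID, port) pair has no DNS name, no netmask, and no
routing -- which is also why several of the features discussed above
(e.g. referring a client to another server) do not translate directly.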