On Wed, 2017-09-20 at 08:25 -0700, Frank Filz wrote:
> > On Sep 20, 2017, at 10:45 AM, J. Bruce Fields <bfields@fieldses.org> wrote:
> > > On Wed, Sep 20, 2017 at 10:40:45AM -0400, Chuck Lever wrote:
> > > > File handles suddenly change and lock state vanishes after a live
> > > > migration event, both of which would be catastrophic for
> > > > hypervisor mount points.
> > >
> > > They're talking about a Ganesha/Ceph backend. It should be able to
> > > preserve filehandles.
> >
> > That's only one possible implementation. I'm thinking in terms of
> > what needs to be documented for interoperability purposes.
>
> It seems like live migration pretty much requires a back end that will
> preserve file handles.
>
> > > Lock migration will require server-side implementation work but
> > > not protocol changes that I'm aware of.
> > >
> > > It could be a lot of implementation work, though.
> >
> > Agreed.
>
> I think the lock migration can be handled the way we handle state
> migration in an HA environment - where we treat it as a server reboot
> to the client (so SM_NOTIFY to v3 clients, the various errors v4 uses
> to signal server reboot; in either case, the client will initiate lock
> reclaim).

Mind showing us an architecture for that? As far as I can see, the
layering is as follows:

   VM client
----------------
  host knfsd
----------------
  host client
----------------
 Storage server

So how would you notify the VM client that its locks have been
migrated?

--
Trond Myklebust
Linux NFS client maintainer, PrimaryData
trond.myklebust@xxxxxxxxxxxxxxx
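
For readers who haven't followed the HA-failover discussion: below is a
minimal sketch of the NFSv4 half of the "treat it as a server reboot"
idea Frank describes, showing how a server might gate LOCK requests
during a post-migration grace period. The status-code values are the
real ones from RFC 7530; everything else (in_grace(), handle_lock(),
the simplified argument list) is hypothetical and is not the knfsd or
Ganesha API.

    #include <stdbool.h>
    #include <stdio.h>

    /* Real NFSv4.0 status codes, per RFC 7530. */
    #define NFS4_OK              0
    #define NFS4ERR_GRACE    10013
    #define NFS4ERR_NO_GRACE 10033

    /*
     * Illustrative stand-in: a real server would track a grace window
     * (typically one lease period) started when the migrated lock
     * state arrived on the destination, exactly as after a reboot.
     */
    static bool in_grace(void)
    {
            return true;
    }

    /*
     * During grace only reclaims may proceed (a production server
     * would also verify the client actually held the lock before
     * migration); once grace expires, only new lock requests may.
     */
    static int handle_lock(bool reclaim)
    {
            if (in_grace())
                    return reclaim ? NFS4_OK : NFS4ERR_GRACE;
            return reclaim ? NFS4ERR_NO_GRACE : NFS4_OK;
    }

    int main(void)
    {
            printf("reclaim during grace  -> %d\n", handle_lock(true));  /* 0 */
            printf("new lock during grace -> %d\n", handle_lock(false)); /* 10013 */
            return 0;
    }

Note that this dance only works at the server that actually holds the
client's lock state; in the re-export layering above, that is the host
knfsd rather than the storage server, which is exactly the gap the
closing question points at.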