On Wed, 2017-09-20 at 14:34 -0400, bfields@xxxxxxxxxxxx wrote:
> On Wed, Sep 20, 2017 at 06:17:07PM +0000, Trond Myklebust wrote:
> > On Wed, 2017-09-20 at 08:25 -0700, Frank Filz wrote:
> > > > > On Sep 20, 2017, at 10:45 AM, J. Bruce Fields
> > > > > <bfields@fieldses.org> wrote:
> > > > >
> > > > > On Wed, Sep 20, 2017 at 10:40:45AM -0400, Chuck Lever wrote:
> > > > > > File handles suddenly change and lock state vanishes after a
> > > > > > live migration event, both of which would be catastrophic
> > > > > > for hypervisor mount points.
> > > > >
> > > > > They're talking about a Ganesha/Ceph backend. It should be
> > > > > able to preserve filehandles.
> > > >
> > > > That's only one possible implementation. I'm thinking in terms
> > > > of what needs to be documented for interoperability purposes.
> > >
> > > It seems like live migration pretty much requires a back end that
> > > will preserve file handles.
> > >
> > > > > Lock migration will require server-side implementation work
> > > > > but not protocol changes that I'm aware of.
> > > > >
> > > > > It could be a lot of implementation work, though.
> > > >
> > > > Agreed.
> > >
> > > I think the lock migration can be handled the way we handle state
> > > migration in an HA environment - where we treat it as a server
> > > reboot to the client (so SM_NOTIFY to v3 clients, the various
> > > errors v4 uses to signal server reboot; in either case, the client
> > > will initiate lock reclaim).
> >
> > Mind showing us an architecture for that? As far as I can see, the
> > layering is as follows:
> >
> > VM client
> > --------------
> > host knfsd
> > --------------
> > host client
> > --------------
> > Storage server
> >
> > So how would you notify the VM client that its locks have been
> > migrated?
>
> All I've seen mentioned in this thread is
>
> VM client
> ---------
> host Ganesha
> ---------
> Ceph or Gluster
>
> Did I misunderstand?
>
> NFS proxying would certainly make it all more entertaining.

Pretty sure I've mentioned it before in these VSOCK threads. I
personally see that as a lot more interesting than re-exporting ceph
and gluster...

-- 
Trond Myklebust
Linux NFS client maintainer, PrimaryData
trond.myklebust@xxxxxxxxxxxxxxx
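
Frank's "treat it as a server reboot" approach above leans entirely on
the clients' existing crash-recovery machinery: rpc.statd's SM_NOTIFY
drives NLM reclaim for v3, and the stale-clientid/stateid errors drive
grace-period reclaim for v4. As a rough illustration only, here is a
toy model in plain C of that decision flow. The enum names, helper
functions, and printf messages are invented stand-ins; the real logic
lives in the kernel's fs/lockd and fs/nfs/nfs4state.c, and the v4
reclaim semantics are defined in RFC 7530.

/*
 * Toy model of "migration looks like a server reboot" recovery.
 * Not kernel code: all identifiers here are illustrative.
 */
#include <stdio.h>

enum proto { NFSV3, NFSV4 };

/* Signals a client can receive that mean "server lock state is gone". */
enum reboot_signal {
	SM_NOTIFY_RECEIVED,          /* v3: statd reports a server reboot */
	NFS4ERR_STALE_CLIENTID_SEEN, /* v4: clientid from a previous epoch */
	NFS4ERR_STALE_STATEID_SEEN,  /* v4: stateid from a previous epoch */
};

static void reclaim_locks(enum proto p)
{
	if (p == NFSV3) {
		/* lockd re-sends each held lock with the reclaim bit
		 * set, inside the server's grace period. */
		printf("v3: re-sending NLM lock requests with reclaim=1\n");
	} else {
		/* The v4 client re-establishes its lease, then re-opens
		 * files and re-acquires locks using CLAIM_PREVIOUS
		 * during the server's grace period. */
		printf("v4: SETCLIENTID, then OPEN/LOCK with CLAIM_PREVIOUS\n");
	}
}

static void handle_signal(enum reboot_signal s)
{
	switch (s) {
	case SM_NOTIFY_RECEIVED:
		reclaim_locks(NFSV3);
		break;
	case NFS4ERR_STALE_CLIENTID_SEEN:
	case NFS4ERR_STALE_STATEID_SEEN:
		reclaim_locks(NFSV4);
		break;
	}
}

int main(void)
{
	/* After migration, the destination server answers as if freshly
	 * rebooted, so clients of either version walk their normal
	 * crash-recovery path. */
	handle_signal(SM_NOTIFY_RECEIVED);
	handle_signal(NFS4ERR_STALE_CLIENTID_SEEN);
	return 0;
}

The appeal of the trick, as Bruce notes, is that nothing new is needed
on the wire: v3 reuses the NSM/NLM reboot notification and v4 reuses
the grace period and reclaim claim types it already defines. The cost
is server-side implementation work to make the migrated server enter
grace and accept those reclaims.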