On Mon, 2018-12-10 at 14:53 -0500, J. Bruce Fields wrote:
> On Mon, Dec 10, 2018 at 02:23:10PM -0500, J. Bruce Fields wrote:
> > On Mon, Dec 10, 2018 at 01:12:31PM -0500, Jeff Layton wrote:
> > > On Mon, 2018-12-10 at 12:47 -0500, J. Bruce Fields wrote:
> > > > We've got a long-standing complaint that tools like lsof, when run on an
> > > > NFS server, overlook opens and locks held by NFS clients.
> > > >
> > > > The information's all there, it's just a question of how to expose it.
> > > >
> > > > Easiest might be a single flat file like /proc/locks, but I've always
> > > > hoped we could do something slightly more structured, using a
> > > > subdirectory per NFS client.
> > > >
> > > > Jeff Layton looked into this several years ago. I don't remember if
> > > > there was some particular issue or if he just got bogged down in VFS
> > > > details.
> > > >
> > >
> > > I think I had a patch that generated a single flat file for locks, but
> > > you wanted to present a directory or file per-client, and I just never
> > > got around to reworking the earlier patch.
> >
> > Oh, OK, makes sense.
>
> (But, um, if anyone has a good starting point to recommend to me here,
> I'm interested. E.g. another pseudofs that's a good example to follow.)
>

I looked for the branch, but I can't find it now. It may be possible to
find my original posting of it on the mailing list, but that was years
ago, and I'm pretty sure it'd be badly bitrotted by now anyway.

Where do you intend for this to live? Do you plan to build a new
hierarchy under /proc/fs/nfsd, or use something like sysfs or debugfs?

> I also had some idea that we might eventually also benefit from some
> two-way communication. But the only idea I had there was some sort of
> "destroy this client now" operation, which is probably less important
> for NFSv4 state, since it gets cleaned up automatically on lease expiry.
>

Per-client cancellation sounds like a nice feature. The fault injection
code had some (less granular) stuff for killing off live clients. It
may be worth going over that.
--
Jeff Layton <jlayton@xxxxxxxxxx>
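
A minimal sketch of what the debugfs option mentioned above might look
like: one directory per client under a "clients" root, each holding a
seq_file-backed "states" file. This is only an illustration under
assumed names -- struct nfsd_client_info, its fields, and the
nfsd_client_debugfs_add() helper are hypothetical stand-ins, not the
real nfs4_client layout in fs/nfsd/state.h; only the debugfs and
seq_file calls are actual kernel APIs.

    #include <linux/module.h>
    #include <linux/debugfs.h>
    #include <linux/seq_file.h>

    /* Hypothetical per-client bookkeeping; the real nfs4_client differs. */
    struct nfsd_client_info {
    	u64	cl_id;
    	char	cl_hostname[64];
    	int	cl_open_count;
    	int	cl_lock_count;
    };

    /* e.g. <debugfs>/nfsd/clients, created at module init (not shown) */
    static struct dentry *nfsd_clients_root;

    static int nfsd_client_states_show(struct seq_file *m, void *v)
    {
    	struct nfsd_client_info *clp = m->private;

    	seq_printf(m, "client: %llu (%s)\n",
    		   (unsigned long long)clp->cl_id, clp->cl_hostname);
    	seq_printf(m, "opens: %d\nlocks: %d\n",
    		   clp->cl_open_count, clp->cl_lock_count);
    	return 0;
    }

    static int nfsd_client_states_open(struct inode *inode, struct file *file)
    {
    	/* debugfs stashes the 'data' pointer in i_private for us */
    	return single_open(file, nfsd_client_states_show, inode->i_private);
    }

    static const struct file_operations nfsd_client_states_fops = {
    	.owner		= THIS_MODULE,
    	.open		= nfsd_client_states_open,
    	.read		= seq_read,
    	.llseek		= seq_lseek,
    	.release	= single_release,
    };

    /* Create clients/<id>/states when a client establishes its lease. */
    static struct dentry *nfsd_client_debugfs_add(struct nfsd_client_info *clp)
    {
    	char name[32];
    	struct dentry *dir;

    	snprintf(name, sizeof(name), "%llu", (unsigned long long)clp->cl_id);
    	dir = debugfs_create_dir(name, nfsd_clients_root);
    	debugfs_create_file("states", 0444, dir, clp,
    			    &nfsd_client_states_fops);
    	return dir;
    }

The directory would get torn down on lease expiry with
debugfs_remove_recursive(). The same per-client directory could later
grow a write-only file to serve as the "destroy this client now" knob
discussed above, along the lines of what the fault injection code did,
but that part is purely speculative here.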