The existing rpc.nfsd program was designed during a different time, when we just didn't require that much control over how it behaved. It's clunky to work with.

In response to Chuck's recent RFC patch to add a knob to disable READ_PLUS calls, I mentioned that it might be a good time to make a clean break from the past and start a new program for controlling nfsd. Here's what I'm thinking: let's build a Swiss-army-knife kind of interface like git or virsh:

# nfsdctl stats                 <--- fetch the new stats that got merged
# nfsdctl add_listener          <--- add a new listen socket, by address or hostname
# nfsdctl set v3 on             <--- enable NFSv3
# nfsdctl set splice_read off   <--- disable splice reads (per Chuck's recent patch)
# nfsdctl set threads 128       <--- spin up the threads

We could start with just the bare minimum for now (the stats interface), and then expand on it. Once we're at feature parity with rpc.nfsd, we'd want systemd to preferentially use nfsdctl instead of rpc.nfsd to start and stop the server. systemd will also need to fall back to rpc.nfsd if nfsdctl or the kernel's netlink interface isn't present.

Note that I think this program will have to be a compiled binary rather than a python script or the like, given that it'll be involved in system startup.

It turns out that Lorenzo already has a C program that has a lot of the plumbing we'd need:

    https://github.com/LorenzoBianconi/nfsd-netlink

I think it might be good to clean up the interface a bit, write a manpage, and merge that into nfs-utils. (A rough sketch of the sort of netlink plumbing involved is appended below.)

Questions:

1/ one big binary, or smaller nfsdctl-* programs (like git uses)?

2/ should it automagically read in nfs.conf? (I tend to think it should, but we might want an option to disable that)

3/ should "set threads" activate the server, or just set a count, with a separate activation step to start it? If we want the latter, then we may want to twiddle the proposed netlink interface a bit.

I'm sure other questions will arise as we embark on this too.

Thoughts? Anyone have objections to this idea?

--
Jeff Layton <jlayton@xxxxxxxxxx>
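
[Appended sketch, for illustration only.] Here's a minimal example of the kind of generic netlink plumbing nfsdctl would need, roughly what "nfsdctl stats" might do under the hood. It's based on my reading of Lorenzo's tree and the merged rpc_status bits, and it assumes libnl-genl-3, the "nfsd" family name, and the NFSD_CMD_RPC_STATUS_GET dump command; attribute parsing and error handling are elided. Treat the names and the build line as assumptions, not a final design:

/* Minimal sketch (assumptions: libnl-genl-3, the "nfsd" generic netlink
 * family, and NFSD_CMD_RPC_STATUS_GET from the merged spec).
 * Build: cc nfsdctl-sketch.c $(pkg-config --cflags --libs libnl-genl-3.0)
 */
#include <stdio.h>
#include <linux/netlink.h>
#include <linux/nfsd_netlink.h>		/* NFSD_CMD_RPC_STATUS_GET */
#include <netlink/netlink.h>
#include <netlink/genl/genl.h>
#include <netlink/genl/ctrl.h>

/* Called once per record in the dump; a real nfsdctl would nla_parse()
 * the attributes and pretty-print them. This sketch just counts. */
static int rpc_status_cb(struct nl_msg *msg, void *arg)
{
	++*(int *)arg;
	return NL_OK;
}

int main(void)
{
	struct nl_sock *sk = nl_socket_alloc();
	struct nl_msg *msg;
	int family, inflight = 0;

	if (!sk || genl_connect(sk) < 0) {
		fprintf(stderr, "can't connect generic netlink socket\n");
		return 1;
	}

	/* resolve the family name to an id; "nfsd" per Lorenzo's tree */
	family = genl_ctrl_resolve(sk, "nfsd");
	if (family < 0) {
		fprintf(stderr, "nfsd netlink family not available\n");
		return 1;
	}

	/* ask the kernel to dump the status of all in-flight RPCs */
	msg = nlmsg_alloc();
	genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, family, 0,
		    NLM_F_DUMP, NFSD_CMD_RPC_STATUS_GET, 0);
	nl_socket_modify_cb(sk, NL_CB_VALID, NL_CB_CUSTOM,
			    rpc_status_cb, &inflight);
	nl_send_auto(sk, msg);
	nlmsg_free(msg);
	nl_recvmsgs_default(sk);

	printf("%d RPCs in flight\n", inflight);
	nl_socket_free(sk);
	return 0;
}

The point being that the whole thing is a few dozen lines of libnl boilerplate, which I think argues for a small compiled binary rather than anything heavier.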