Re: Should we establish a new nfsdctl userland program?

On Fri, 26 Jan 2024, Jeff Layton wrote:
> The existing rpc.nfsd program was designed during a different time, when
> we just didn't require that much control over how it behaved. It's
> klunky to work with.

How is it clunky?

  rpc.nfsd

that starts the service.

  rpc.nfsd 0

that stops the service.

Ok, not completely elegant.  Maybe

  nfsdctl start
  nfsdctl stop

would be better.

> 
> In a response to Chuck's recent RFC patch to add knob to disable
> READ_PLUS calls, I mentioned that it might be a good time to make a
> clean break from the past and start a new program for controlling nfsd.
> 
> Here's what I'm thinking:
> 
> Let's build a swiss-army-knife kind of interface like git or virsh:
> 
> # nfsdctl stats			<--- fetch the new stats that got merged
> # nfsdctl add_listener		<--- add a new listen socket, by address or hostname
> # nfsdctl set v3 on		<--- enable NFSv3
> # nfsdctl set splice_read off	<--- disable splice reads (per Chuck's recent patch)
> # nfsdctl set threads 128	<--- spin up the threads

Surely the "git" style would use

   nfsdctl version 3 on
   nfsdctl threads 128

Apart from "stats", "start", and "stop", I suspect that we developers
would be the only people to actually use this functionality.  Until now,
  echo > /proc/fs/nfsd/foo
has been enough for most tweaking.  Having a proper tool would likely
lower the barrier to entry, which can only be a good thing.
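
(Those knobs are mostly just files under /proc/fs/nfsd, so today's tweaking
looks something like:

   echo 16 > /proc/fs/nfsd/threads
   echo "+4.2 -3" > /proc/fs/nfsd/versions

the second line enabling v4.2 and disabling v3.)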

> 
> We could start with just the bare minimum for now (the stats interface),
> and then expand on it. Once we're at feature parity with rpc.nfsd, we'd
> want to have systemd preferentially use nfsdctl instead of rpc.nfsd to
> start and stop the server. systemd will also need to fall back to using
> rpc.nfsd if nfsdctl or the netlink program isn't present.

systemd doesn't need a fallback.  Systemd always activates
nfs-server.service.  We just need to make sure the installed
nfs-server.service matches the installed tools, and as they are
distributed as parts of the same package, that should be trivial.
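
The switch-over would essentially just be the ExecStart/ExecStop lines in
the unit file - roughly like this (paths and details illustrative only):

   [Service]
   Type=oneshot
   RemainAfterExit=yes
   ExecStart=/usr/sbin/nfsdctl start
   ExecStop=/usr/sbin/nfsdctl stop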

> 
> Note that I think this program will have to be a compiled binary vs. a
> python script or the like, given that it'll be involved in system
> startup.

Agreed.

> 
> It turns out that Lorenzo already has a C program that has a lot of the
> plumbing we'd need:
> 
>     https://github.com/LorenzoBianconi/nfsd-netlink
> 
> I think it might be good to clean up the interface a bit, build a
> manpage and merge that into nfs-utils.
> 
> Questions:
> 
> 1/ one big binary, or smaller nfsdctl-* programs (like git uses)?

/usr/lib/git-core (on my laptop) has 168 entries.  Only 29 of them are
NOT symlinks to 'git'.

While I do like the "tool command args" interface, and I like the option
of adding commands by simply creating drop-in tools, I think that core
functionality should go in the core tool.
So: "one big binary" please - with call-out functionality if anyone can
be bothered implementing it.
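
(By "call-out" I mean the git-style fallback: an unrecognised subcommand
such as "nfsdctl frobnicate" - name purely hypothetical - would be exec'd
as "nfsdctl-frobnicate" found on $PATH.)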

> 
> 2/ should it automagically read in nfs.conf? (I tend to think it should,
> but we might want an option to disable that)

Absolutely, definitely.  I'm not convinced we need an option to disable
reading the config, but allowing command-line options to override specific
config settings is sensible.

Most uses of this tool would come from nfs-server.service, which would
presumably call
   nfsdctl start
which would set everything based on the nfs.conf and thus start the
server.  And
   nfsdctl stop
which would set the number of threads to zero.
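
The [nfsd] section of /etc/nfs.conf already carries the relevant settings,
something like (values purely illustrative):

   [nfsd]
   threads=16
   vers3=y
   vers4.2=y
   port=2049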

> 
> 3/ should "set threads" activate the server, or just set a count, and
> then we do a separate activation step to start it? If we want that, then
> we may want to twiddle the proposed netlink interface a bit.

It might be sensible to have "set max-threads" which doesn't actually
start the service.
I would really REALLY like a dynamic thread pool.  It would start at 1
(or maybe 2) and grow on demand up to the max, and idle threads
(inactive for 30 seconds?) would exit.  We could then default the max to
some function of memory size and people could mostly ignore the
num-threads setting.

I don't have patches today, but if we are re-doing the interfaces I would
like us to plan them to support a pool rather than a fixed thread count -
something along the lines sketched below.
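
That might look something like this (command names purely hypothetical):

   nfsdctl set min-threads 2
   nfsdctl set max-threads 128
   nfsdctl set idle-timeout 30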

> 
> I'm sure other questions will arise as we embark on this too.
> 
> Thoughts? Anyone have objections to this idea?

I think this is an excellent question to ask.  As you say, it is a long
time since rpc.nfsd was created, and it has grown incrementally rather
than being clearly designed.

> -- 
> Jeff Layton <jlayton@xxxxxxxxxx>
> 

Thanks,
NeilBrown




