Re: kernel.org list issues... / was: Fwd: Turn NFSD_MAX_* into tuneables ? / was: Re: Increasing NFSD_MAX_OPS_PER_COMPOUND to 96

> On Jan 13, 2024, at 4:14 PM, Jeff Layton <jlayton@xxxxxxxxxx> wrote:
> 
> On Sat, 2024-01-13 at 16:10 +0000, Chuck Lever III wrote:
>> 
>>> On Jan 13, 2024, at 10:09 AM, Jeff Layton <jlayton@xxxxxxxxxx> wrote:
>> 
>>>> Solaris 11 is known to send COMPOUNDs that are too large
>>>> during mount, but the rest of the time these three client
>>>> implementations are not known to send large COMPOUNDs.
>>> Actually the FreeBSD client is the same as Solaris, in that it does the
>>> entire mount path in one compound. If you were to attempt a mount
>>> with more than 48 components, it would exceed 50 ops in the compound.
>>> I don't think it can exceed 50 ops any other way.
>> 
>> I'd like to see the raw packet captures to confirm that our
>> speculation about the problem is indeed correct. Since this
>> limit is hit only when mounting (and not at all by Linux
>> clients), I don't yet see how that would "make NFSD slow".
> 
> It seems quite plausible that keeping the max low causes the client to
> have to do a deep pathwalk using multiple RPCs instead of one. That
> seems like it could have performance implications.

That's a lot of "mights" and "coulds." Not saying you're
wrong, but this needs some evidentiary backup.

No one has yet demonstrated how this limit directly impacts
perceived NFS server performance in this case. There has
been only speculation about what the clients are doing and
how much splitting the work into multiple round trips would
slow them down.

Again, if path walking is happening only at mount time, I
don't understand why it would have /any/ workload
performance impact to do it in multiple steps versus one.
Do you? A lengthy path walk during mount is not in the
performance path.
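
For a rough sense of scale, here is a back-of-the-envelope
sketch (mine, not NFSD code; the op mix is an assumption
based on the description quoted above: one PUTROOTFH, one
LOOKUP per component, and a trailing GETFH + GETATTR, and
it ignores the SEQUENCE and PUTFH ops each continuation
COMPOUND would add):

#include <stdio.h>

/*
 * Illustrative only: how many COMPOUND round trips a client
 * needs to walk an N-component path if it packs at most
 * "maxops" operations per COMPOUND.
 */
static unsigned int compounds_needed(unsigned int components,
				     unsigned int maxops)
{
	unsigned int total_ops = components + 3;

	return (total_ops + maxops - 1) / maxops; /* round up */
}

int main(void)
{
	/* a 60-component mount path vs. three max-ops values */
	printf("maxops  8: %u\n", compounds_needed(60, 8));
	printf("maxops 50: %u\n", compounds_needed(60, 50));
	printf("maxops 96: %u\n", compounds_needed(60, 96));
	return 0;
}

Even that pathological 60-component mount path costs only a
handful of round trips, once, at mount time.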

That's my main concern, and it's specific to this problem
report. It's not a concern about the actual value of NFSD's
max-ops, large or small.

Let's see packet captures and performance numbers before
making code changes, please? I don't think that's an
unreasonable request. My guess is there is some (bogus)
error handling logic gumming up the works, and that it is
side-stepped when max-ops is large enough to handle these
requests in one COMPOUND.


> I don't really see the value in limiting the number of
> ops per compound. Are we really any better off having the client break
> those up into multiple round trips?

Yes, clients are better off handling this properly.


> Why?


Clients don't have any control over the max-ops limit that
a server places on a session. They really cannot depend on
it being large.

In fact, servers that are resource-constrained are
permitted to reduce max-ops and the maximum session slot
count (CB_RECALL_SLOT is one mechanism to do this). That
is totally expected and valid server behavior. (No, NFSD
does not do this currently, but the protocol allows it).

Fix the clients once, and they will be able to handle all
these scenarios transparently and efficiently against any
server, old or new.
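
To make that concrete, here is a minimal sketch of the
client-side fix (hypothetical code, not the actual Linux or
FreeBSD client; all the names here are made up): clamp each
COMPOUND to the fore channel's ca_maxoperations negotiated
at CREATE_SESSION time, and split anything longer into
multiple COMPOUNDs. A real client would also need to
re-establish the current filehandle (PUTFH) at the start of
each continuation COMPOUND, which this sketch omits.

#include <stdio.h>

struct nfs4_session {
	unsigned int fc_maxoperations;	/* from the CREATE_SESSION reply */
};

struct nfs4_op {
	unsigned int opnum;		/* per-op arguments elided */
};

/* stand-in for the real transport: send one COMPOUND, await reply */
static int send_compound(struct nfs4_session *s,
			 const struct nfs4_op *ops, unsigned int nops)
{
	(void)ops;
	printf("COMPOUND with %u ops (session limit %u)\n",
	       nops, s->fc_maxoperations);
	return 0;
}

static int send_ops_split(struct nfs4_session *s,
			  const struct nfs4_op *ops, unsigned int nops)
{
	while (nops > 0) {
		unsigned int chunk = nops;
		int err;

		if (chunk > s->fc_maxoperations)
			chunk = s->fc_maxoperations;

		err = send_compound(s, ops, chunk);
		if (err)
			return err;

		ops += chunk;
		nops -= chunk;
	}
	return 0;
}

int main(void)
{
	struct nfs4_session s = { .fc_maxoperations = 8 };
	struct nfs4_op walk[51] = { { 0 } };	/* e.g. a 48-component path walk */

	return send_ops_split(&s, walk, 51);
}

A client built this way copes with whatever max-ops a
server advertises, large or small, using the same code
path.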


--
Chuck Lever





