> On Jan 12, 2024, at 8:47 PM, Dan Shelton <dan.f.shelton@xxxxxxxxx> wrote:
>
> On Sat, 13 Jan 2024 at 02:32, Jeff Layton <jlayton@xxxxxxxxxx> wrote:
>>
>> On Sat, 2024-01-13 at 01:19 +0100, Dan Shelton wrote:
>>> Hello!
>>>
>>> We've been experiencing significant nfsd performance problems with a
>>> customer who has a deeply nested filesystem hierarchy: lots of
>>> subdirectories, some of them 60-80 levels deep (!!), which leads to
>>> an exponential slowdown in nfsd accesses.
>>>
>>> Some of the issues have been addressed by implementing a better
>>> directory walker via multiple directory fds and openat() (instead of
>>> just cwd+open()), but the nfsd side was still a pretty dramatic
>>> problem until we bumped #define NFSD_MAX_OPS_PER_COMPOUND in
>>> linux-6.7/fs/nfsd/nfsd.h from 50 to 96. After that, nfsd performed
>>> MUCH better.
>>>
>>
>> I guess your clients are trying to do a long pathwalk in a single
>> COMPOUND?
>
> Likely.

That's known bad client behavior, btw. It won't scale in the number of
path components.

>> Is this the Windows client?
>
> No, clients are Solaris 11, Linux and FreeBSD.

Solaris 11 is known to send COMPOUNDs that are too large during mount,
but the rest of the time these three client implementations are not
known to send large COMPOUNDs.

>> At first glance, I don't see any real downside to increasing that
>> value. Maybe we can bump it to 100 or so? What would probably be best
>> is to propose a patch so we can discuss the change formally.
>
> OK. How does this work?

Let's back up a minute. I'd like to see raw packet captures with the
current MAX_OPS setting and the new larger one. Something is not adding
up.

--
Chuck Lever