On Tue, Apr 23, 2024 at 05:12:56PM +0200, Petr Vorel wrote:
> > On Mon, Apr 22, 2024 at 12:09:19PM +1000, NeilBrown wrote:
> > > The calculation of how many clients the nfs server can manage is only a
> > > heuristic. Triggering the laundromat to clean up old clients when we
> > > have more than the heuristic limit is valid, but refusing to create new
> > > clients is not. Client creation should only fail if there really isn't
> > > enough memory available.
> > >
> > > This is not known to have caused a problem in production use, but
> > > testing with lots of clients reports an error, and it is not clear that
> > > this error is justified.
> >
> > It is justified, see 4271c2c08875 ("NFSD: limit the number of v4
> > clients to 1024 per 1GB of system memory"). In cases like these,
> > the recourse is to add more memory to the test system.
>
> FYI the system is using 1468 MB + 2048 MB swap
>
> $ free -m
>                total        used        free      shared  buff/cache   available
> Mem:            1468         347         589           4         686        1121
> Swap:           2048           0        2048
>
> Indeed, increasing the memory to 3430 MB makes the test happy. It's of
> course up to you to decide whether this is just an unrealistic /
> artificial problem which does not affect users, and thus whether the v2
> Neil sent is worth merging.

IMO, if you want to handle a large client cohort, NFSD will need to have
adequate memory available. In production scenarios, I think it is not
realistic to expect a 1.5GB server to handle more than a few dozen NFSv4
clients, given the amount of lease, session, and open/lock state that
can be in flight.

However, in testing scenarios, it's reasonable and even necessary to
experiment with low-memory servers.

I don't disagree that failing the mount attempt outright is a good thing
to do. But to make that fly, we need to figure out how to make NFSv4.1+
behave that way too.

--
Chuck Lever