On Wed, 13 Nov 2024, Daire Byrne wrote:
> Neil,
>
> I'm curious if this work relates to:
>
> https://bugzilla.linux-nfs.org/show_bug.cgi?id=375
> https://lore.kernel.org/all/CAPt2mGMZh9=Vwcqjh0J4XoTu3stOnKwswdzApL4wCA_usOFV_g@xxxxxxxxxxxxxx

Yes, it could possibly help with that - though more work would be
needed.  nfsd currently has a hard limit of 160 slots per session.
That wouldn't be enough, I suspect.  The Linux client has a hard limit
of 1024.  That might be enough.  Allowing nfsd to have 1024 (or more)
shouldn't be too hard...

> As my thread described, we currently use NFSv3 for our high-latency
> NFS re-export cases simply because it performs way better for parallel
> client operations.  When you use re-exporting to serve many clients,
> you are in effect taking all those client operations and stuffing
> them through a single client (the re-export server), which then
> becomes a bottleneck.  Add any kind of latency on top (>10ms) and the
> NFSD_CACHE_SIZE_SLOTS_PER_SESSION limit (32) for NFSv4 becomes a
> major bottleneck for a single client (the re-export server).
>
> We also used your "VFS: support parallel updates in the one directory"
> patches for similar reasons up until I couldn't port them to newer
> kernels anymore (my kernel code munging skills are not sufficient!).

Yeah - I really should get back to that.  Al and Linus suggested some
changes and I just never got around to making them.

Thanks,
NeilBrown

> Sorry to spam the thread if I am misinterpreting what this patch set
> is all about.
>
> Daire
>
> On Wed, 13 Nov 2024 at 05:54, NeilBrown <neilb@xxxxxxx> wrote:
> >
> > This patch set aims to allocate session-based DRC slots on demand,
> > and free them when not in use, or when memory is tight.
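[Editor's note: the grow-on-demand policy described in the cover letter
below can be modeled in user space roughly as follows.  All names here
(struct session, try_grow_slots) are illustrative only and do not match
the actual nfsd patches; calloc stands in for a GFP_NOWAIT kmalloc.]

```c
#include <stdlib.h>

/* Toy model of a session whose reply-cache slot table grows on demand. */
struct slot { int in_use; };

struct session {
    struct slot **slots;    /* table grown one entry at a time */
    int nr_slots;           /* slots currently allocated */
    int max_slots;          /* hard cap (the patches lift nfsd's old 160) */
};

/* Grow by one slot only when every existing slot is busy, and fail
 * quietly when allocation fails - mimicking the "only if a NOWAIT
 * allocation can succeed" policy.  Returns 1 if a slot was added. */
static int try_grow_slots(struct session *se)
{
    int i, idle = 0;
    struct slot *s, **bigger;

    for (i = 0; i < se->nr_slots; i++)
        if (!se->slots[i]->in_use)
            idle++;
    if (idle > 0 || se->nr_slots >= se->max_slots)
        return 0;               /* spare capacity, or at the cap */

    s = calloc(1, sizeof(*s));  /* stand-in for GFP_NOWAIT kmalloc */
    if (!s)
        return 0;               /* NOWAIT failed: just don't grow */
    bigger = realloc(se->slots, (se->nr_slots + 1) * sizeof(*bigger));
    if (!bigger) {
        free(s);
        return 0;
    }
    se->slots = bigger;
    se->slots[se->nr_slots++] = s;
    return 1;
}
```

Note the deliberately passive failure mode: if memory is tight the
session simply keeps its current slot count, which is what makes the
"maybe occasionally use a firmer request" question below interesting.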
> >
> > I've tested with NFSD_MAX_UNUSED_SLOTS set to 1 so that freeing is
> > overly aggressive, and with lots of printks, and it seems to do the
> > right thing, though memory pressure has never freed anything - I
> > think you need several clients with a non-trivial number of slots
> > allocated before the thresholds in the shrinker code will trigger
> > any freeing.
> >
> > I haven't made use of the CB_RECALL_SLOT callback.  I'm not sure
> > how useful that is.  There are certainly cases where simply setting
> > the target in a SEQUENCE reply might not be enough, but I doubt
> > they are very common.  You would need a session to be completely
> > idle, with the last request received on it indicating that lots of
> > slots were still in use.
> >
> > Currently we allocate slots one at a time when the last available
> > slot was used by the client, and only if a NOWAIT allocation can
> > succeed.  It is possible that this isn't quite aggressive enough.
> > When performing a lot of writeback it can be useful to have lots of
> > slots, but memory pressure is also likely to build up on the
> > server, so GFP_NOWAIT is likely to fail.  Maybe occasionally using
> > a firmer request (outside the spinlock) would be justified.
> >
> > We free slots when the number of unused slots passes some threshold
> > - currently 6 (because ... why not).  Possibly a hysteresis should
> > be added so we don't free unused slots for at least N seconds.
> >
> > When the shrinker wants to apply pressure we remove slots equally
> > from all sessions.  Maybe there should be some proportionality, but
> > that would be more complex and I'm not sure it would gain much.
> > Slot 0 can never be freed, of course.
> >
> > I'm very interested to see what people think of the overall
> > approach, and of the specifics of the code.
> >
> > Thanks,
> > NeilBrown
> >
> >
> > [PATCH 1/4] nfsd: remove artificial limits on the session-based DRC
> > [PATCH 2/4] nfsd: allocate new session-based DRC slots on demand.
> > [PATCH 3/4] nfsd: free unused session-DRC slots
> > [PATCH 4/4] nfsd: add shrinker to reduce number of slots allocated
>
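[Editor's note: the freeing policy (patch 3/4) and the equal-per-session
shrinker (patch 4/4) described in the cover letter can be sketched as a
user-space model like this.  The threshold of 6 and the "slot 0 can
never be freed" rule come from the cover letter; every function and
struct name here is hypothetical, not the actual nfsd code.]

```c
#include <stdlib.h>

/* Toy model of per-session reply-cache slots, for illustration only. */
struct slot { int in_use; };

struct session {
    struct slot **slots;
    int nr_slots;
};

static int count_unused(const struct session *se)
{
    int i, n = 0;
    for (i = 0; i < se->nr_slots; i++)
        if (!se->slots[i]->in_use)
            n++;
    return n;
}

/* Patch 3/4 idea: once more than max_unused slots are idle, free
 * trailing idle slots back down to the threshold.  Slot 0 is never
 * freed.  Returns how many slots were released. */
static int prune_session(struct session *se, int max_unused)
{
    int freed = 0;

    while (se->nr_slots > 1 &&
           count_unused(se) > max_unused &&
           !se->slots[se->nr_slots - 1]->in_use) {
        free(se->slots[--se->nr_slots]);
        freed++;
    }
    return freed;
}

/* Patch 4/4 idea: under memory pressure, remove up to the same number
 * of slots from every session, rather than proportionally to size. */
static int shrink_all(struct session **sessions, int n, int per_session)
{
    int i, freed = 0;

    for (i = 0; i < n; i++) {
        struct session *se = sessions[i];
        int k = per_session;

        while (k-- > 0 && se->nr_slots > 1 &&
               !se->slots[se->nr_slots - 1]->in_use) {
            free(se->slots[--se->nr_slots]);
            freed++;
        }
    }
    return freed;
}
```

The equal-shrinking loop makes the trade-off discussed above concrete:
a one-slot session contributes nothing (slot 0 survives), while large
sessions all give up the same amount regardless of how many idle slots
they hold.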