On Fri, 15 Nov 2024 at 04:56, NeilBrown <neilb@xxxxxxx> wrote:
>
> On Wed, 13 Nov 2024, Daire Byrne wrote:
> > Neil,
> >
> > I'm curious if this work relates to:
> >
> > https://bugzilla.linux-nfs.org/show_bug.cgi?id=375
> > https://lore.kernel.org/all/CAPt2mGMZh9=Vwcqjh0J4XoTu3stOnKwswdzApL4wCA_usOFV_g@xxxxxxxxxxxxxx
>
> Yes, it could possibly help with that - though more work would be
> needed.
> nfsd currently has a hard limit of 160 slots per session. That wouldn't
> be enough I suspect. The Linux client has a hard limit of 1024. That
> might be enough.
> Allowing nfsd to have 1024 (or more) shouldn't be too hard...

That would be cool - I'd love to be able to switch out NFSv3 for NFSv4
for our use cases. That said, any changes to nfsd to support this would
likely take many years to make it into the RHEL-based storage servers we
currently use.

Our re-export case is pretty unique and niche in the sense that a single
client is essentially trying to do the work of many clients.

> > We also used your "VFS: support parallel updates in the one directory"
> > patches for similar reasons up until I couldn't port it to newer
> > kernels anymore (my kernel code munging skills are not sufficient!).
>
> Yeah - I really should get back to that. Al and Linus suggested some
> changes and I just never got around to making them.

That would also be awesome. Again, our specific niche use case (many
clients writing to the same directory via a re-export) is probably the
main beneficiary, but maybe it helps with other (more common) workloads
too.

Cheers,

Daire

> Thanks,
> NeilBrown
>
>
> >
> > Sorry to spam the thread if I am misinterpreting what this patch set
> > is all about.
> >
> > Daire
> >
> >
> > On Wed, 13 Nov 2024 at 05:54, NeilBrown <neilb@xxxxxxx> wrote:
> > >
> > > This patch set aims to allocate session-based DRC slots on demand, and
> > > free them when not in use, or when memory is tight.
> > >
> > > I've tested with NFSD_MAX_UNUSED_SLOTS set to 1 so that freeing is
> > > overly aggressive, and with lots of printks, and it seems to do the
> > > right thing, though memory pressure has never freed anything - I think
> > > you need several clients with a non-trivial number of slots allocated
> > > before the thresholds in the shrinker code will trigger any freeing.
> > >
> > > I haven't made use of the CB_RECALL_SLOT callback. I'm not sure how
> > > useful that is. There are certainly cases where simply setting the
> > > target in a SEQUENCE reply might not be enough, but I doubt they are
> > > very common. You would need a session to be completely idle, with the
> > > last request received on it indicating that lots of slots were still
> > > in use.
> > >
> > > Currently we allocate slots one at a time when the last available slot
> > > was used by the client, and only if a NOWAIT allocation can succeed.
> > > It is possible that this isn't quite aggressive enough. When performing
> > > a lot of writeback it can be useful to have lots of slots, but memory
> > > pressure is also likely to build up on the server so GFP_NOWAIT is
> > > likely to fail. Maybe occasionally using a firmer request (outside the
> > > spinlock) would be justified.
> > >
> > > We free slots when the number of unused slots passes some threshold -
> > > currently 6 (because ... why not). Possibly some hysteresis should be
> > > added so we don't free unused slots for at least N seconds.
> > >
> > > When the shrinker wants to apply pressure we remove slots equally from
> > > all sessions. Maybe there should be some proportionality but that would
> > > be more complex and I'm not sure it would gain much. Slot 0 can never
> > > be freed of course.
> > >
> > > I'm very interested to see what people think of the overall approach,
> > > and of the specifics of the code.
> > >
> > > Thanks,
> > > NeilBrown
> > >
> > >
> > > [PATCH 1/4] nfsd: remove artificial limits on the session-based DRC
> > > [PATCH 2/4] nfsd: allocate new session-based DRC slots on demand.
> > > [PATCH 3/4] nfsd: free unused session-DRC slots
> > > [PATCH 4/4] nfsd: add shrinker to reduce number of slots allocated
> > >
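
For anyone following along without the patches in front of them, the
grow/shrink policy described in the cover letter could be sketched very
roughly as below. This is an illustrative approximation only, not the
actual nfsd code: the demo_session struct, its fields and the helper
names are invented for the example. The core idea matches the
description, though - grow by one slot with an opportunistic GFP_NOWAIT
allocation when the client has just consumed the last available slot,
and trim back (never touching slot 0) once the count of idle slots
passes a small threshold.

/*
 * Illustrative sketch only -- not the actual nfsd patch code.
 * All names here (demo_session, se_slot_count, etc.) are hypothetical.
 */
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

#define DEMO_SLOT_FREE_THRESHOLD	6	/* "currently 6 (because ... why not)" */

struct demo_session {
	spinlock_t	se_lock;
	unsigned int	se_slot_count;		/* slots currently allocated */
	unsigned int	se_unused;		/* slots idle in recent requests */
	void		*se_slots[1024];	/* fixed upper bound for the sketch */
};

/* Called after a SEQUENCE request has just used the last free slot. */
static void demo_maybe_grow(struct demo_session *ses, size_t slot_size)
{
	void *slot;

	/* Opportunistic: only grow if memory is available right now. */
	slot = kzalloc(slot_size, GFP_NOWAIT);
	if (!slot)
		return;		/* under memory pressure, simply don't grow */

	spin_lock(&ses->se_lock);
	if (ses->se_slot_count < ARRAY_SIZE(ses->se_slots))
		ses->se_slots[ses->se_slot_count++] = slot;
	else
		kfree(slot);	/* hit the cap; drop the extra allocation */
	spin_unlock(&ses->se_lock);
}

/* Called when a SEQUENCE exchange shows many slots sitting idle. */
static void demo_maybe_shrink(struct demo_session *ses)
{
	spin_lock(&ses->se_lock);
	/* Never free slot 0; only trim while enough slots are idle. */
	while (ses->se_unused > DEMO_SLOT_FREE_THRESHOLD &&
	       ses->se_slot_count > 1) {
		kfree(ses->se_slots[--ses->se_slot_count]);
		ses->se_unused--;
	}
	spin_unlock(&ses->se_lock);
}

The allocation in this sketch happens outside the spinlock, which is
also where a "firmer" (blocking) allocation could be tried occasionally,
as the cover letter suggests; the real patches should be consulted for
how the slot table and the shrinker actually tie together.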