On Mon, Jan 27, 2025 at 12:20:31PM +1100, NeilBrown wrote:
> [
> davec added to cc in case I've said something incorrect about list_lru
>
> Changes in this version:
>  - no _bh locking
>  - add name for a magic constant
>  - remove unnecessary race-handling code
>  - give a more meaningful name for a lock for /proc/lock_stat
>  - minor cleanups suggested by Jeff
>
> ]
>
> The nfsd filecache currently uses list_lru for tracking files recently
> used in NFSv3 requests which need to be "garbage collected" when they
> have become idle - unused for 2-4 seconds.
>
> I do not believe list_lru is a good tool for this.  It does not allow
> the timeout which filecache requires, so we have to add a timeout
> mechanism which holds the list_lru lock while the whole list is scanned
> looking for entries that haven't been recently accessed.  When the list
> is largish (even a few hundred) this can noticeably block new requests,
> which need the lock to remove a file to access it.

Looks entirely like a trivial implementation bug in how the list_lru
is walked in nfsd_file_gc():

static void nfsd_file_gc(void)
{
        LIST_HEAD(dispose);
        unsigned long ret;

        ret = list_lru_walk(&nfsd_file_lru, nfsd_file_lru_cb,
                            &dispose, list_lru_count(&nfsd_file_lru));
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        trace_nfsd_file_gc_removed(ret, list_lru_count(&nfsd_file_lru));
        nfsd_file_dispose_list_delayed(&dispose);
}

i.e. list_lru_walk() has been told to walk the entire list in a single
lock hold if nothing blocks it.  We've known about this for a long,
long time, and it's something we've handled for a long time with
shrinkers, too.  Here's the typical way of doing a full list aging and
GC pass in one go without excessively long lock holds:

{
        long nr_to_scan = list_lru_count(&nfsd_file_lru);
        LIST_HEAD(dispose);

        while (nr_to_scan > 0) {
                long batch = min(nr_to_scan, 64);

                list_lru_walk(&nfsd_file_lru, nfsd_file_lru_cb,
                              &dispose, batch);
                if (list_empty(&dispose))
                        break;
                dispose_list(&dispose);
                nr_to_scan -= batch;
        }
}

And we don't need two lists to separate recently referenced entries
from GC candidates, because we have a referenced bit in nf->nf_flags.
i.e. nfsd_file_lru_cb() does:

nfsd_file_lru_cb(struct list_head *item, struct list_lru_one *lru,
                 void *arg)
{
        ....
        /* If it was recently added to the list, skip it */
        if (test_and_clear_bit(NFSD_FILE_REFERENCED, &nf->nf_flags)) {
                trace_nfsd_file_gc_referenced(nf);
                return LRU_ROTATE;
        }
        .....

This moves recently referenced entries to the far end of the list, so
all the reclaimable objects end up congregating at the end of the list
that is walked first by list_lru_walk().  IOWs, a batched walk like the
one above resumes exactly where it left off, because it is always
either reclaiming or rotating the object at the head of the list.

> This patch removes the list_lru and instead uses 2 simple linked lists.
> When a file is accessed it is removed from whichever list it is on,
> then added to the tail of the first list.  Every 2 seconds the second
> list is moved to the "freeme" list and the first list is moved to the
> second list.  This avoids any need to walk a list to find old entries.

Yup, that's exactly what the current code does via the laundrette work
that schedules nfsd_file_gc() to run every two seconds.

> These lists are per-netns rather than global as the freeme list is
> per-netns as the actual freeing is done in nfsd threads which are
> per-netns.
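To make sure we're talking about the same scheme, what is described
above boils down to something like the sketch below.  This is
illustrative only - the struct and function names are made up here and
it is not the actual patch code:

#include <linux/list.h>
#include <linux/spinlock.h>

/* hypothetical sketch of the two-list scheme; one of these per netns */
struct fcache_gc_lists {
        spinlock_t              lock;
        struct list_head        recent;  /* touched since the last tick */
        struct list_head        old;     /* idle for at least one tick */
        struct list_head        freeme;  /* to be closed by nfsd threads */
};

/* On access: pull the file off whichever list it is on, requeue as recent */
static void fcache_gc_touch(struct fcache_gc_lists *gc, struct list_head *node)
{
        spin_lock(&gc->lock);
        list_del_init(node);
        list_add_tail(node, &gc->recent);
        spin_unlock(&gc->lock);
}

/* Every 2 seconds: "old" is handed to "freeme", "recent" becomes "old" */
static void fcache_gc_age(struct fcache_gc_lists *gc)
{
        spin_lock(&gc->lock);
        list_splice_tail_init(&gc->old, &gc->freeme);
        list_splice_tail_init(&gc->recent, &gc->old);
        spin_unlock(&gc->lock);
}

i.e. a file that goes untouched for one to two ticks of the 2-second
timer ends up on the freeme list, which is where the 2-4 second idle
window quoted above comes from.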
The list_lru is actually multiple lists - there is a separate list per
NUMA node - so moving to global-scope, per-netns linked lists is going
to reduce scalability and increase lock contention on large machines.

I also don't see any perf numbers, scalability analysis, latency
measurements, CPU profiles, etc. showing the problems with using
list_lru for the GC function, nor any improvement this new code brings.

i.e. it's kinda hard to make any real comment on "I do not believe
list_lru is a good tool for this" when there are no actual measurements
provided to back the statement one way or the other...

-Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
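For reference, the per-node structure is visible right in the walk API:
list_lru_walk() is essentially the loop below, with each node's list
protected by its own lock.  This is a simplified sketch of the
include/linux/list_lru.h helper, not a verbatim copy:

/* sketch: how list_lru_walk() fans out across the per-node lists */
static inline unsigned long
list_lru_walk(struct list_lru *lru, list_lru_walk_cb isolate,
              void *cb_arg, unsigned long nr_to_walk)
{
        long isolated = 0;
        int nid;

        for_each_node_state(nid, N_NORMAL_MEMORY) {
                /* each node's list has its own internal lock */
                isolated += list_lru_walk_node(lru, nid, isolate,
                                               cb_arg, &nr_to_walk);
                if (nr_to_walk <= 0)
                        break;
        }
        return isolated;
}

A single per-netns list with a single spinlock gives up that per-node
separation, which is the lock contention concern above.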