[PATCH] nfsd: add scheduling point in nfsd_file_gc()

This patch is intended to fix the problem reported by Chuck in
"[PATCH v1 2/2] NFSD: Change the filecache laundrette workqueue again":

  However, I've seen the laundrette running for multiple milliseconds
  on some workloads, delaying other work.

I believe this problem is most likely caused by a lack of scheduling
points in nfsd_file_gc(), so this patch adds them as needed.

On reflection, I think that the approach here is wrong, but fixing that
properly would need a bigger change. We generally expect a given file
to be used repeatedly for a while, and then for accesses to stop when
the client closes the file. The nfsd_file_gc() call happens every two
seconds with the aim of discarding all the files that haven't been used
in the last two seconds. To do this, it scans all the files currently
on the list - many of which are likely to have been used recently.
This seems like a waste of effort.

I think it would be better to have two lists, A and B. When the
refcount of a GC file reaches zero, it is added to A. When the
refcount is incremented, the file is removed from whichever list it is
on. Every 2 seconds we free everything on B and then splice A across
to B. This completely avoids walking through all the still-active
files and moving them to the end of the LRU. However, this would make
the shrinker a little more complex, as we wouldn't be able to use
list_lru.

So I'm not proposing that immediately, but would like to know what
others think first.

Thanks,
NeilBrown
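
[Editorial illustration, not part of the original mail.] One way to add
scheduling points to a long LRU walk is to bound each list_lru_walk()
pass and call cond_resched() between passes. This is only a sketch of
that idea, not the actual patch; the batch size of 1024 and the exact
shape of the loop are assumptions, while nfsd_file_lru,
nfsd_file_lru_cb and nfsd_file_dispose_list_delayed() are the existing
filecache identifiers the sketch assumes are still in use:

	/*
	 * Sketch only: walk the LRU in bounded batches and yield the
	 * CPU between batches so the laundrette cannot monopolise a
	 * CPU when the LRU is very long.
	 */
	static void nfsd_file_gc(void)
	{
		LIST_HEAD(dispose);
		unsigned long remaining = list_lru_count(&nfsd_file_lru);

		while (remaining > 0) {
			/* 1024 is an arbitrary batch size for this sketch */
			unsigned long nr = min(remaining, 1024UL);

			list_lru_walk(&nfsd_file_lru, nfsd_file_lru_cb,
				      &dispose, nr);
			nfsd_file_dispose_list_delayed(&dispose);
			remaining -= nr;
			cond_resched();
		}
	}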
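
[Editorial illustration, not part of the original mail.] To make the
two-list proposal concrete, here is a rough sketch. All names
(gc_pending for list A, gc_expired for list B, gc_lock, and the queue,
dequeue and free helpers) are hypothetical, not existing filecache
identifiers, and interaction with the shrinker is deliberately glossed
over:

	/* List A: files whose refcount just dropped to zero. */
	static LIST_HEAD(gc_pending);
	/* List B: files idle for at least one full 2-second interval. */
	static LIST_HEAD(gc_expired);
	static DEFINE_SPINLOCK(gc_lock);

	/* Called when a GC-able file's refcount reaches zero. */
	static void nfsd_file_gc_queue(struct nfsd_file *nf)
	{
		spin_lock(&gc_lock);
		list_add_tail(&nf->nf_lru, &gc_pending);
		spin_unlock(&gc_lock);
	}

	/* Called when the refcount is incremented again. */
	static void nfsd_file_gc_dequeue(struct nfsd_file *nf)
	{
		spin_lock(&gc_lock);
		if (!list_empty(&nf->nf_lru))
			list_del_init(&nf->nf_lru);
		spin_unlock(&gc_lock);
	}

	/* Run from the laundrette every 2 seconds. */
	static void nfsd_file_gc(void)
	{
		LIST_HEAD(dispose);
		struct nfsd_file *nf, *tmp;

		spin_lock(&gc_lock);
		/* Everything on B has been idle a full interval: free it. */
		list_splice_init(&gc_expired, &dispose);
		/* A becomes the new B and waits out the next interval. */
		list_splice_init(&gc_pending, &gc_expired);
		spin_unlock(&gc_lock);

		list_for_each_entry_safe(nf, tmp, &dispose, nf_lru) {
			list_del_init(&nf->nf_lru);
			nfsd_file_free(nf);
		}
	}

Note how nfsd_file_gc() never touches still-referenced files at all: it
only splices list heads and then frees what had already sat on B for a
full interval, which is the saving the mail describes.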