On Wed, 2018-07-11 at 11:13 -0400, Waiman Long wrote:
> On 07/11/2018 06:21 AM, Michal Hocko wrote:
> > On Tue 10-07-18 12:09:17, Waiman Long wrote:
[...]
> > > I am going to reduce the granularity of each unit to 1/1000 of
> > > the total system memory so that for large systems with TBs of
> > > memory, a smaller amount of memory can be specified.
> >
> > It is just a matter of time for this to be too coarse as well.
>
> The goal is to not have too much memory consumed by negative
> dentries, and also to have a limit that won't be reached by regular
> daily activities. So a limit of 1/1000 of total system memory will
> be good enough on large-memory systems even if the absolute number
> is really big.

OK, I think the reason we're going round and round here without
converging is that one of the goals of the mm subsystem is to manage
all of our cached objects, and to it the negative (and positive)
dentries simply look like a clean cache of objects. Right at the
moment, mm manages them in the same way it manages all the other
caches, a lot of which suffer from the "you can cause lots of
allocations to artificially grow them" problem. So the main question
is: why doesn't the current mm control of the caches work well enough
for dentries? What are the problems you're seeing that mm should be
catching? If you can answer this, then we can get on to whether a
separate shrinker, cache separation, or some fix in mm itself is the
right answer.

What you say above is based on a conclusion: limiting dentries
improves system performance. What we're asking for is evidence for
that conclusion, so we can explore whether the same would go for any
of our other system caches (i.e. do we have a global cache management
problem, or is it only the dentry cache?).

James
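
P.S. To make the "artificially grow them" point concrete, here is a
minimal user-space sketch (my own illustration, not from the patch
series) that inflates the negative dentry cache simply by looking up
names that do not exist; the path and count are arbitrary:

/* negdent.c: grow the negative dentry cache via failed lookups.
 * Build: gcc -O2 -o negdent negdent.c
 * Each stat() of a nonexistent name fails with ENOENT but leaves a
 * negative dentry behind; watch nr_dentry (the first field of
 * /proc/sys/fs/dentry-state) climb while this runs.
 */
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
	char name[64];
	struct stat st;

	for (long i = 0; i < 10000000; i++) {
		snprintf(name, sizeof(name), "/tmp/no-such-file-%ld", i);
		stat(name, &st);	/* ENOENT; caches a negative dentry */
	}
	return 0;
}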
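
P.P.S. For anyone following along, the hook mm already gives a cache
for participating in reclaim is the shrinker interface; whether the
dentry cache needs a hard limit on top of it is exactly the open
question above. A rough sketch of registering one, following the
4.18-era API (the my_* names are placeholders for whatever cache is
being shrunk, not real kernel symbols):

#include <linux/shrinker.h>

/* hypothetical helpers for the cache being managed */
static unsigned long my_nr_cached_objects(void);
static unsigned long my_free_objects(unsigned long nr);

/* tell mm how many objects could be freed right now */
static unsigned long my_count(struct shrinker *s,
			      struct shrink_control *sc)
{
	return my_nr_cached_objects();
}

/* free up to sc->nr_to_scan objects, return how many were freed */
static unsigned long my_scan(struct shrinker *s,
			     struct shrink_control *sc)
{
	return my_free_objects(sc->nr_to_scan);
}

static struct shrinker my_shrinker = {
	.count_objects	= my_count,
	.scan_objects	= my_scan,
	.seeks		= DEFAULT_SEEKS,
};

static int __init my_init(void)
{
	return register_shrinker(&my_shrinker);
}

With this in place, mm calls my_count/my_scan under memory pressure
and the cache shrinks proportionally to everything else, which is why
the question of what that mechanism is failing to catch matters.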