On 07/12/2018 12:04 PM, James Bottomley wrote:
> On Thu, 2018-07-12 at 11:54 -0400, Waiman Long wrote:
>>
>> It is not that the dentry cache is harder to get rid of than other
>> memory. It is the ability to generate an unlimited number of
>> negative dentries that will displace other useful memory from the
>> system. What the patch is trying to do is to have a warning or
>> notification system in place to spot unusual activity in regard to
>> the number of negative dentries in the system. The system
>> administrators can then decide what to do next.
> But every cache has this property: I can cause the same effect by doing
> a streaming read on a multi-gigabyte file: the page cache will fill
> with the clean pages belonging to the file until I run out of memory
> and it has to start evicting older cache entries. Once we hit the
> steady state of minimal free memory, the mm subsystem tries to balance
> the cache requests (like my streaming read) against the existing pool
> of cached objects.
>
> The question I'm trying to get an answer to is why does the dentry
> cache need special limits when the mm handling of the page cache (and
> other mm caches) just works?
>
> James
>

I/O activity can be easily tracked. The generation of negative
dentries, however, is more insidious. So the ability to track and be
notified when too many negative dentries are created can be a useful
tool for system administrators. Besides, there are paranoid users out
there who want control over as many system parameters as possible.

Cheers,
Longman
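
P.S. For anyone who wants to see the effect for themselves, below is a
quick userspace sketch (not part of the patch; the /tmp path and the
default count are made up for illustration). It stats a stream of
nonexistent names and dumps /proc/sys/fs/dentry-state before and after.
Each failed lookup of a unique name can leave a negative dentry behind,
so on most filesystems the first two fields (nr_dentry and nr_unused)
grow steadily while no I/O shows up anywhere:

#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

/* Dump the raw contents of /proc/sys/fs/dentry-state; the first two
 * fields are nr_dentry and nr_unused. */
static void show_dentry_state(const char *when)
{
	FILE *f = fopen("/proc/sys/fs/dentry-state", "r");
	char buf[128];

	if (!f)
		return;
	if (fgets(buf, sizeof(buf), f))
		printf("%s: %s", when, buf);
	fclose(f);
}

int main(int argc, char **argv)
{
	/* Default count is arbitrary; bump it to make the growth obvious. */
	long i, nitems = argc > 1 ? atol(argv[1]) : 1000000;
	struct stat st;
	char name[64];

	show_dentry_state("before");

	/* Each lookup of a unique nonexistent name fails with ENOENT
	 * but can still instantiate a negative dentry; no privilege
	 * is needed for any of this. */
	for (i = 0; i < nitems; i++) {
		snprintf(name, sizeof(name), "/tmp/no-such-file-%ld", i);
		stat(name, &st);	/* expected to fail */
	}

	show_dentry_state("after");
	return 0;
}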