----- Original Message -----
| Hello,
|
| We have a number of servers that create a lot of temp files, or check
| for the existence of non-existent files. Every such operation creates
| a dentry, and soon most of the free memory is consumed by 'negative'
| dentry entries. We observed this behavior on both the CentOS kernel
| v2.6.32-358 and the Amazon Linux kernel v3.4.43-4.
|
| Some processes on these machines also occasionally allocate large
| chunks of memory, and when that happens the kernel reclaims a large
| number of stale dentries. This reclaim takes time: kswapd kicks in,
| and an allocation plus bzero() of 4GB that normally takes under a
| second can take 20s or more.
|
| Because the memory demand is intermittent while negative dentry
| generation is fairly continuous, vfs_cache_pressure doesn't help much.
|
| My thought was to add a sysctl that limits the number of dentries per
| super block (sb-max-dentry). Every time a new dentry is allocated in
| d_alloc(), check whether dentry_stat.nr_dentry exceeds (number of
| super blocks * sb-max-dentry). If it does, queue an asynchronous
| workqueue call to prune_dcache(). A second sysctl would control the
| percentage by which to reduce the dentry count when this happens.
|
| Thanks for your input. If this sounds like a reasonable idea, I'll
| send out a patch.
|
| Cheers,
| Keyur.

Hi Keyur,

I like the idea. I've had people bring up the same issue in relation
to GFS2, especially when running du and similar operations on a very
large file system. This wasn't on GFS2, was it?

Regards,

Bob Peterson
Red Hat File Systems
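
For concreteness, here is a rough sketch of what the proposed check
could look like against the 3.4-era dcache internals. prune_dcache_sb(),
iterate_supers(), dentry_stat, and s_nr_dentry_unused all exist in that
kernel; the sysctl variables, nr_super_blocks, and the function names
below are hypothetical and only illustrate the shape of the patch.

#include <linux/dcache.h>
#include <linux/fs.h>
#include <linux/workqueue.h>

/* Hypothetical knobs: fs.sb-max-dentry and the prune percentage. */
int sysctl_sb_max_dentry;
int sysctl_sb_dentry_prune_percent;

/* Hypothetical count of mounted super blocks, maintained elsewhere. */
extern int nr_super_blocks;

static void prune_one_sb(struct super_block *sb, void *arg)
{
	int pct = *(int *)arg;
	int count = sb->s_nr_dentry_unused * pct / 100;

	/* Shrink this super block's unused-dentry LRU by 'pct' percent. */
	if (count > 0)
		prune_dcache_sb(sb, count);
}

static void dentry_prune_workfn(struct work_struct *work)
{
	iterate_supers(prune_one_sb, &sysctl_sb_dentry_prune_percent);
}

static DECLARE_WORK(dentry_prune_work, dentry_prune_workfn);

/* Called from d_alloc() after the new dentry has been accounted. */
static inline void maybe_prune_dentries(void)
{
	if (sysctl_sb_max_dentry &&
	    dentry_stat.nr_dentry >
	    (long)nr_super_blocks * sysctl_sb_max_dentry)
		schedule_work(&dentry_prune_work);
}

Running the prune from a workqueue keeps the shrinking work off the
allocation path, so the cost is paid incrementally in the background
rather than as the multi-second stall described above.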