On 09/04/2017 08:59 AM, Jan Kara wrote:
>
> So I agree they are somewhat different but not fundamentally different -
> e.g. the total number of files in the file system can easily be so high
> that dentries + inodes cannot fit into RAM, and thus you are in a very
> similar situation as with negative dentries. That's actually one of the
> reasons why people were trying to bend memcgs to account slab cache as
> well. But it didn't end anywhere AFAIK.
>
> The reason why I'm objecting is that the limit on the number of negative
> dentries is another tuning knob, it is for very specific cases, and most
> sysadmins will have no clue how to set it properly (even I wouldn't have
> a good idea).

Thanks for letting me know which part of the patch you are objecting to.
As suggested by Linus, I can easily change the patch to do some kind of
auto-tuning based on the ratio of positive to negative dentries, without
needing a user-configurable kernel command line option. I added that
option to make the patch more flexible, but I agree that most people will
likely leave it at the default value without ever using it.

>> Current dentry lookup is through a hash table. The lookup performance
>> will depend on the number of hash slots as well as the number of
>> entries queued in each slot. So in general, lookup performance
>> deteriorates the more entries you put into a given slot. That is true
>> no matter how many slots you have allocated.
> Agreed, but with rhashtables the number of slots grows dynamically with
> the number of entries...

Currently, alloc_large_system_hash() scales the number of hash slots with
the system memory size, which is adequate in most cases. Using rhashtable
would add a little overhead to the hash index computation, so we would
probably see a slight slowdown with a small number of dentries and a bit
of speed-up with a large number of dentries. That may not be a trade-off
we want to make.

Cheers,
Longman
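
For illustration only, below is a minimal, untested userspace sketch of the
kind of ratio-based auto-tuning mentioned above. It is not from the actual
patch; the structure, the helper names and the 1:1 ratio are all made up
for the example.

/*
 * Toy sketch only -- not the real patch.  Instead of a fixed, user-supplied
 * cap, negative dentries are trimmed whenever they grow beyond some multiple
 * of the positive ones.  All names and the 1:1 ratio are hypothetical.
 */
#include <stdbool.h>
#include <stdio.h>

struct dcache_counts {
	unsigned long nr_positive;	/* dentries backed by an inode */
	unsigned long nr_negative;	/* dentries without an inode */
};

/* Prune when negative dentries outnumber positive ones (assumed 1:1 cap). */
static bool negative_dentries_over_limit(const struct dcache_counts *c)
{
	return c->nr_negative > c->nr_positive;
}

/* How many negative dentries would have to go to get back under the ratio. */
static unsigned long negative_dentries_to_prune(const struct dcache_counts *c)
{
	return negative_dentries_over_limit(c) ?
	       c->nr_negative - c->nr_positive : 0;
}

int main(void)
{
	struct dcache_counts c = { .nr_positive = 1000, .nr_negative = 2500 };

	printf("prune %lu negative dentries\n", negative_dentries_to_prune(&c));
	return 0;
}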
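
On the lookup-cost point above, a quick back-of-the-envelope illustration
(plain userspace C, not kernel code): with a bucket count fixed at boot, the
average chain length, and hence the work per lookup, grows linearly with the
number of entries; rhashtable avoids that by growing the table, at the price
of a slightly more expensive index computation. The bucket count below is an
arbitrary example value.

/*
 * Back-of-the-envelope illustration, not kernel code: with a table whose
 * size is fixed at boot, the average hash chain length (and thus the cost
 * of a lookup) grows linearly with the number of entries.
 */
#include <stdio.h>

int main(void)
{
	const unsigned long buckets = 1UL << 20;	/* fixed at "boot" */
	unsigned long entries;

	for (entries = 1UL << 20; entries <= 1UL << 24; entries <<= 1)
		printf("%8lu K entries -> average chain length %.1f\n",
		       entries >> 10, (double)entries / buckets);

	return 0;
}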