I've seen a problem on a large server where a horde of negative dentries
slowed down all lookups significantly:

  watchdog: BUG: soft lockup - CPU#25 stuck for 22s! [atop:968884]
  at __d_lookup_rcu+0x6f/0x190

slabtop:

      OBJS   ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
  85118166 85116916   0%    0.19K 2026623       42  16212984K dentry
  16577106 16371723   0%    0.10K  425054       39   1700216K buffer_head
    935850   934379   0%    1.05K   31195       30    998240K ext4_inode_cache
    663740   654967   0%    0.57K   23705       28    379280K radix_tree_node
    399987   380055   0%    0.65K    8163       49    261216K proc_inode_cache
    226380   168813   0%    0.19K    5390       42     43120K cred_jar
     70345    65721   0%    0.58K    1279       55     40928K inode_cache
    105927    43314   0%    0.31K    2077       51     33232K filp
    630972   601503   0%    0.04K    6186      102     24744K ext4_extent_status
      5848     4269   0%    3.56K     731        8     23392K task_struct
     16224    11531   0%    1.00K     507       32     16224K kmalloc-1024
      6752     5833   0%    2.00K     422       16     13504K kmalloc-2048
    199680   158086   0%    0.06K    3120       64     12480K anon_vma_chain
    156128   154751   0%    0.07K    2788       56     11152K Acpi-Operand

Total RAM is 256 GB.

These dentries came from temporary files created and deleted by postgres,
but the effect can easily be reproduced by looking up non-existent files.
Of course, memory pressure eventually washes them away.

A similar problem happened before around proc sysctl entries:
https://lkml.org/lkml/2017/2/10/47
This one does not concentrate in one bucket and needs much more memory.

It looks like the dcache needs some kind of background shrinker that is
started when the dcache size or the fraction of negative dentries exceeds
some threshold.