> So both you and Waiman complain about negative dentries consuming space
> (and I agree, they do) but neither of you has explained why it is a problem. If
> memory is ever needed, negative dentries are very easy to reclaim. So to some
> extent this is like complaining that page cache consumes your memory - which is
> again true but it is a deliberate decision and it helps performance.
>
> It is possible that some of these dentries are so rarely used that they are
> indeed just a waste but then I'd like to see detailed analysis of which negative
> dentries are these and how your reclaim heuristics improve the situation. But I
> haven't seen any performance numbers from either you or Waiman. So please
> gather some performance numbers justifying your change so that we have
> something to talk about...

I am sorry; let me explain my problem. On my Linux machine (kernel 4.4) there was
a program "foo" doing backups. It created backup files one by one, named
foo.1.bak, foo.2.bak, ..., foo.n.bak in sequence: e.g. it created foo.2.bak and
then deleted foo.1.bak, created foo.3.bak and then deleted foo.2.bak, and so on.
One day I found that memory usage was high (total memory was 8G):

/proc/meminfo:
SReclaimable:    7634344 kB

After checking slabinfo:
dentry  40001682 40001682    192   21    1 : tunables ...

and dentry-state:
40001058 39990056 45 0 0 0

By that point foo had created about 40 million files, and only one file
remained; all the others had been deleted. At the same time I had a kernel
module allocating pages with GFP_ATOMIC, and the allocations sometimes failed
because reclaim was not fast enough (maybe I will tune the memory reclaim
parameters...).
Another performance issue with keeping a large number of dentries is that dentry
lookups slow down. The test: I created, closed, and removed different files while
different numbers of dentries were present (on x86_64, kernel 4.4, 12 CPUs,
Intel(R) Xeon(R) CPU E5645 @ 2.40GHz).

When the dentry count was 18800:
  open   1000 files, avg time per call 10321.671 ns
  close  1000 files, avg time per call   455.299 ns
  unlink 1000 files, avg time per call  5179.519 ns

When the dentry count was 40001058:
  open   1000 files, avg time per call 13483.361 ns
  close  1000 files, avg time per call   455.067 ns
  unlink 1000 files, avg time per call  7645.616 ns

Actually, I can modify the program "foo" and make the backup file names wrap
around, e.g. foo.1.bak, foo.2.bak, ..., foo.100.bak -> foo.1.bak. But I am
worried that if programs create and delete many unique temporary files, the
number of negative dentries will keep growing.

Thanks,
Kevin