On Sun, Sep 14, 2014 at 9:58 PM, Tetsuo Handa
<penguin-kernel@xxxxxxxxxxxxxxxxxxx> wrote:
>
> OK. I quote comment #5 and #9 of RHBZ 1061562 below.

Looks like that BZ entry is somehow restricted, so I can't see the
details, but it turns out that I think we fixed this issue a long
time ago for other reasons.

We can definitely use a lot of memory on negative dentries (see my
own test program attached), and that will inevitably lead to *some*
issues (because we do get memory pressure, and then have to get to
shrink_slabs etc to get rid of them). But negative dentries should be
fairly easy to get rid of, and the real problem in that bugzilla
seems to be the dcache_lock. That dcache_lock doesn't exist at all
any more, and dentries should scale almost infinitely.

So I'd like to have some way to limit excessive negative dentries
anyway, because they obviously do fill up the hash lists and use up
memory, so they can certainly be problematic. But I don't think they
are necessarily any worse than streaming a large file and filling up
memory that way. Except, likely, that "streaming a large file" is so
much more common that we may well have more problems with negative
dentries, simply because nobody actually does this in practice and
those code paths see much less real-world testing.

I didn't see anything obvious from my test program, but I didn't run
any real latency test or anything, just a "can't tell a difference in
normal use" aside from the normal "ok, no free memory".

                    Linus
#include <sys/stat.h>

int main(int argc, char *argv[])
{
	char name[] = "a0000000000";
	struct stat st;

	for (;;) {
		/* Increment the ten-digit counter in the name,
		 * carrying leftwards. */
		char *p = name + 10;

		while (++*p > '9') {
			/* The carry ran into the leading 'a' (now
			 * 'b'): all digits have wrapped, so stop. */
			if (*p == 'b')
				return 0;
			*p = '0';
			--p;
		}
		/* stat() a name that does not exist, leaving a
		 * negative dentry behind for it. */
		stat(name, &st);
	}
	return 0;
}
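
If you want to watch the dentry counts grow while the test program
runs, something like the minimal sketch below works. It just polls
/proc/sys/fs/dentry-state once a second, assuming the classic
six-field layout documented in Documentation/sysctl/fs.txt, where the
first two fields are nr_dentry and nr_unused (the rest being
age_limit, want_pages and two dummies):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	for (;;) {
		long nr_dentry, nr_unused;
		FILE *f = fopen("/proc/sys/fs/dentry-state", "r");

		if (!f)
			return 1;
		/* First two fields: total dentries in use, and
		 * unused (reclaimable) ones sitting on the LRU. */
		if (fscanf(f, "%ld %ld", &nr_dentry, &nr_unused) == 2)
			printf("dentries: %ld  unused: %ld\n",
			       nr_dentry, nr_unused);
		fclose(f);
		sleep(1);
	}
}

Run alongside the flooder, it should show nr_dentry climbing until
memory pressure kicks in and the shrinkers start reclaiming the
unused entries.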