Re: [PATCH] fs/dcache: dentries should free after files unlinked or directories removed

On 08/26/2017 12:18 PM, Linus Torvalds wrote:
> On Fri, Aug 25, 2017 at 11:56 PM, Wangkai (Kevin,C)
> <wangkai86@xxxxxxxxxx> wrote:
>> but I am worried that if there are programs that create and delete many temporary files with unique names,
>> the negative dentries will keep growing.
> The thing is, this has nothing to do with unlink.
>
> The *easiest* way to generate negative dentries is in fact to never
> create any files at all: just look up millions of non-existent names.
>
> IOW, just something like this
>
>     #include <stdio.h>
>     #include <sys/types.h>
>     #include <sys/stat.h>
>     #include <unistd.h>
>
>     int main()
>     {
>         int i;
>         for (i = 0; i < 10000000; i++) {
>                 char name[20];
>                 struct stat st;
>
>                 snprintf(name, sizeof(name), "n:%d", i);
>                 stat(name, &st);
>         }
>         return 0;
>     }
>
> is a much easier and faster way to create negative dentries.
>
> And yes, it's entirely possible that we could/should have some way to
> balance negative dentries against positive ones, but on the whole this
> has not really come up as a huge problem.

It is certainly true that the current scheme of unlimited negative
dentry creation is not a problem in most cases. However, there are
scenarios where it can become one.

A customer discovered the negative dentry issue because of a bug in
their application code, and fixing their code solved the problem.
However, they wondered whether the same behavior could be used as a
DoS vector against a Linux system: a rogue program could simply
generate a massive number of negative dentries continuously. It is
this potential for malicious use that prompted me to create and send
out a patch to limit the number of negative dentries allowed in a
system.

Besides, Kevin has shown that keeping the dentry cache from growing
too big is good for file lookup performance too.

Cheers,
Longman
> For example, your module that does a lot of GFP_ATOMIC allocations -
> if it wasn't for dentries, it would have been something else.
> GFP_ATOMIC *will* fail after a while, because it just can't replenish
> the free memory. That's fundamental. That's what GFP_ATOMIC _means_.
> It's very much meant for "occasional critical allocations", and if you
> do just GFP_ATOMIC, you will fail.
>
>                  Linus





