Re: [PATCH] fs/dcache: dentries should free after files unlinked or directories removed

On Thu 31-08-17 12:27:27, Waiman Long wrote:
> On 08/31/2017 03:53 AM, Jan Kara wrote:
> > On Sun 27-08-17 11:05:34, Waiman Long wrote:
> >>
> >> It is certainly true that the current scheme of unlimited negative
> >> dentry creation is not a problem under most cases. However, there are
> >> scenarios where it can be a problem.
> >>
> >> A customer discovered the negative dentry issue because of a bug in
> >> their application code. They fixed their code to solve the problem.
> >> However, they wondered if this could be used as a vector for a DoS
> >> attack on a Linux system: a rogue program could continuously generate
> >> a massive number of negative dentries. It is this potential for
> >> malicious use of the negative dentry behavior that prompted me to
> >> create and send out a patch to limit the number of negative dentries
> >> allowed in a system.
> > Well, and how is this fundamentally different from a user consuming
> > resources by other means (positive dentries, inode cache, page cache, anon
> > memory etc.)? Sure, you can force slab reclaim to work hard, but a
> > local user has many other ways to do that. So if you can demonstrate
> > that it is too easy to DoS a system in some way, we can talk about
> > mitigating the attack. But the mere ability to make the system busy
> > does not seem serious to me.
> 
> Positive dentries are limited by the total number of files in the file
> system. Negative dentries, on the other hand, have no such limit. There
> are ways to limit other resource usage: memory cgroups cap a user's
> memory consumption, filesystem quotas cap disk space and the number of
> files created or owned, and so on. However, I am not aware of any
> control mechanism that can limit the number of negative dentries
> generated by a given user. That makes negative dentries somewhat
> different from the other resource types you are talking about.

So I agree they are somewhat different, but not fundamentally so - e.g.
the total number of files in a filesystem can easily be so high that
dentries + inodes cannot fit into RAM, which puts you in a very similar
situation to the one with negative dentries. That's actually one of the
reasons why people were trying to bend memcgs to account slab caches as
well, but AFAIK that effort didn't go anywhere.

The reason why I'm objecting is that a limit on the number of negative
dentries is yet another tuning knob: it targets very specific cases, and
most sysadmins will have no clue how to set it properly (even I wouldn't
have a good idea).

> >> Besides, Kevin had shown that keeping the dentry cache from growing too
> >> big was good for file lookup performance too.
> > Well, that rather speaks for better data structure for dentry lookup (e.g.
> > growing hash tables) rather than for limiting negative dentries? I can
> > imagine there are workloads which would benefit from that as well?
> 
> The current dentry lookup goes through a hash table. Lookup performance
> depends on the number of hash slots as well as the number of entries
> chained in each slot. In general, lookup performance deteriorates as
> more entries land in a given slot, and that is true no matter how many
> slots have been allocated.

Agreed, but with rhashtables the number of slots grows dynamically with the
number of entries...

								Honza

-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR
