Re: Limit dentry cache entries

On Tue, May 28, 2013 at 6:49 AM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> On Tue, May 28, 2013 at 02:12:26AM -0400, Keyur Govande wrote:
>> On Sun, May 26, 2013 at 7:23 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
>> > On Fri, May 24, 2013 at 11:12:50PM -0400, Keyur Govande wrote:
>> >> On Mon, May 20, 2013 at 6:53 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
>> >> > On Sun, May 19, 2013 at 11:50:55PM -0400, Keyur Govande wrote:
>> >> >> Hello,
>> >> >>
>> >> >> We have a bunch of servers that create a lot of temp files, or check
>> >> >> for the existence of non-existent files. Every such operation creates
>> >> >> a dentry object and soon most of the free memory is consumed for
>> >> >> 'negative' dentry entries. This behavior was observed on both CentOS
>> >> >> kernel v.2.6.32-358 and Amazon Linux kernel v.3.4.43-4.
> ....
>> >> Also, setting a bad value for the knob would negatively impact
>> >> file I/O performance, which on a spinning disk isn't guaranteed
>> >> anyway. The current situation tanks memory performance, which is
>> >> more surprising to a normal user.
>> >
>> > Which is precisely why a knob is the wrong solution. If it's
>> > something a normal, unsuspecting user has problems with, then it
>> > needs to be handled automatically by the kernel. Expecting users who
>> > don't even know what a dentry is to know about a magic knob that
>> > fixes a problem they don't even know they have is not an acceptable
>> > solution.
>> >
>> > The first step to solving such a problem is to provide a
>> > reproducible, measurable test case in a simple script that
>> > demonstrates the problem that needs solving. If we can reproduce it
>> > at will, then half the battle is already won....
>>
>> Here's a simple test case: https://gist.github.com/keyurdg/5660719 to
>> create a ton of dentry cache entries, and
>> https://gist.github.com/keyurdg/5660723 to allocate some memory.
>>
>> I kicked off 3 instances of fopen in 3 different prefixed directories.
>> After all the memory was filled up with dentry entries, I tried
>> allocating 4GB of memory. It took ~20s. If I turned off the dentry
>> generation programs and attempted to allocate 4GB again, it only took
>> 2s (because the memory was already free). Here's a quick graph of this
>> behavior: http://i.imgur.com/XhgX84d.png
>
> News at 11! Memory allocation when memory is full is slower than
> when it's empty!
>
> That's not what I was asking for. We were talking about negative
> dentry buildup and possibly containing that, not a strawman "I can
> fill all of memory with dentries by creating files" workload.

By passing in a mode of "r", as in "./fopen test1 r & ./fopen test2 r &",
you can create a ton of negative dentry cache entries.

>
> IOWs, your example is not demonstrating the problem you complained
> about. We are not going to put a global limit on active dentries.
>
> If you really want a global dentry cache size limit or to ensure
> that certain processes have free memory available for use, then
> perhaps you should be looking at what you can control with cgroups.
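For anyone wanting to watch the buildup from userspace: the dcache
counters are exported in /proc/sys/fs/dentry-state, where the first field
is the total number of dentries and the second is the number of unused
ones (negative dentries are counted among the unused). A sketch, with an
illustrative fallback sample so it also runs off-Linux:

```shell
# Read the dcache counters; the sample values after || are illustrative.
state=$(cat /proc/sys/fs/dentry-state 2>/dev/null || echo "87338 82056 45 0 0 0")

# Field 1: total dentries; field 2: unused (includes negative) dentries.
echo "$state" | awk '{ printf "total=%s unused=%s\n", $1, $2 }'

# Clean dentries and inodes can be reclaimed immediately (root only):
#   echo 2 > /proc/sys/vm/drop_caches
```

Comparing the unused count before and after running the fopen programs
makes the negative-dentry growth directly visible.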
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@xxxxxxxxxxxxx
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html