Re: [PATCH RFC 0/6] fix the negative dentries bloating system memory usage

On Thu, Jan 21, 2021 at 06:49:39PM +0530, Gautham Ananthakrishna wrote:

> We tested this patch set recently and found that it limits negative dentries to
> a small fraction of total memory. The following are the test results from two
> types of servers: one with 256G of memory and 24 CPUs, and another with 3T of
> memory and 384 CPUs. The test case uses a large number of processes to generate
> negative dentries in parallel. The results below were taken after 72 hours; the
> negative dentry count stays stable around that number even when the test runs
> much longer. Without the patch set, negative dentries consumed 197G on the 256G
> system in less than half an hour, and 2.4T on the 3T system within one day.
> 
> system memory   neg-dentry-number   neg-dentry-mem-usage
> 256G            55259084            10.6G
> 3T              202306756           38.8G
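
For reference, "generating negative dentries in parallel" amounts to nothing more
than hammering lookups on names that do not exist in an existing directory; each
failed lookup leaves a negative dentry behind. A minimal userspace sketch (the
path prefix and the loop count here are arbitrary, not the actual reproducer):

	/* neg_dentry_gen.c: do lookups on nonexistent names so the kernel
	 * caches a negative dentry for each one.  Run many instances in
	 * parallel (with distinct name prefixes) to reproduce the growth
	 * described above. */
	#include <stdio.h>
	#include <sys/stat.h>

	int main(void)
	{
		char name[64];
		struct stat st;

		for (long i = 0; i < 1000000; i++) {
			/* each distinct nonexistent name leaves one negative dentry */
			snprintf(name, sizeof(name), "/tmp/does-not-exist-%ld", i);
			stat(name, &st);	/* fails with ENOENT; that's the point */
		}
		return 0;
	}
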
> 
> For performance testing, we ran the following, and no regression was found.
> 
> 1. create 1M negative dentries and then touch them to convert them to positive
>    dentries
> 
> 2. create 10K/100K/1M files
> 
> 3. remove 10K/100K/1M files
> 
> 4. kernel compile

Good for you; how would that work for thinner boxen, though?  I agree that if you
have 8M hash buckets, your "no more than 3 unused negatives per bucket" is generous
enough for everything, but that's less obvious for something with e.g. 4 or 8 gigs.
And believe it or not, there are real-world boxen like that ;-)
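
To put rough numbers on that, here is a back-of-the-envelope sketch.  The bucket
counts below are assumptions (the real table size comes out of the boot-time hash
sizing, not these figures), and the per-dentry cost is derived from the 10.6G /
55M numbers quoted above:

	/* cap_estimate.c: what a "3 unused negatives per hash bucket" cap
	 * translates to for a few hypothetical box sizes.  Bucket counts
	 * are assumed, not what the kernel would actually allocate. */
	#include <stdio.h>

	int main(void)
	{
		/* ~192 bytes per negative dentry, from the figures quoted above */
		const double bytes_per_neg_dentry = 10.6e9 / 55259084.0;
		const struct { double buckets; double ram_gib; } box[] = {
			{ 8e6,   256 },   /* the 8M-bucket case mentioned above */
			{ 1e6,     8 },   /* hypothetical 8G box */
			{ 0.5e6,   4 },   /* hypothetical 4G box */
		};

		for (unsigned i = 0; i < sizeof(box) / sizeof(box[0]); i++) {
			double cap = 3.0 * box[i].buckets;	/* max unused negatives */
			double mem_gib = cap * bytes_per_neg_dentry / (1 << 30);

			printf("%6.0fG RAM, %8.0f buckets -> cap %8.0f negatives, ~%.2f GiB (%.1f%% of RAM)\n",
			       box[i].ram_gib, box[i].buckets, cap,
			       mem_gib, 100.0 * mem_gib / box[i].ram_gib);
		}
		return 0;
	}

Whether those counts and fractions are "generous enough" on the small end is
exactly the question being raised here.
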


