Re: [PATCH 0/27 v2] Quota scalability patches

On Thu 31-08-17 19:32:08, Wang Shilong wrote:
> Hello,
> 
> On Thu, Aug 31, 2017 at 5:48 PM, Jan Kara <jack@xxxxxxx> wrote:
> 
>     On Thu 31-08-17 17:09:26, Wang Shilong wrote:
>     > This is without your patch:
>     >
>     >    no quota    quota   quota,project
>     > 1   851,207  341,941         178,686
>     > 2   850,368  342,233         191,755
>     > 3   848,877  342,768         193,807
>     >
>     > With your patchset:
>     >    no quota    quota   quota,project
>     > 1   853,391  448,378         385,292
>     > 2   851,379  448,375         407,716
>     > 3   850,203  448,415         406,813
>     >
>     > We still see a creation regression here, but your patchset helps a lot.
> 
>     OK, still quite some way to go... There will always be some cost
>     associated with quota bookkeeping, especially since it has to be
>     synchronized across CPUs to allow for reliable limit checking. But 50%
>     seems quite a bit.
> 
> 
> We ran tests with quota enabled, and collected lock statistics using 'perf
> lock'.
> 
> We also checked for IO/CPU overload!
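> 
> For reference, a typical 'perf lock' session looks roughly like the
> following (a sketch; the exact options we used may have differed):
> 
>   # perf lock record -a -- sleep 30    # trace lock events system-wide for 30s
>   # perf lock report -k wait_total     # sort contended locks by total wait time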

Actually, this doesn't look bad. There are only 1626 contentions on
s_inode_list_lock, and the total wait time was only about 5 ms summed over
all CPUs. So I'm pretty confident contention on that lock is not costing us
much. 'perf lock' seems to show only spinlocks, while I think the real
contention is going to be on sleeping locks. You should be able to see that
using the lockstat framework in the kernel
(Documentation/locking/lockstat.txt).
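Roughly like this (untested, per Documentation/locking/lockstat.txt; needs
a kernel built with CONFIG_LOCK_STAT=y):

  # echo 0 > /proc/lock_stat                # clear any stale counters
  # echo 1 > /proc/sys/kernel/lock_stat     # enable statistics collection
  # <run the file creation benchmark>
  # echo 0 > /proc/sys/kernel/lock_stat     # disable collection
  # grep -A 5 s_inode_list /proc/lock_stat  # inspect the suspect locks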

								Honza
> 
>  Name                    acquired  contended  avg wait (ns)  total wait (ns)  max wait (ns)  min wait (ns)
> 
>  &(&s->s_inode_li...        16850       1626           3060          4976891          77915           1297
>  &(&lru->node[i]....         8332        434           3506          1521648         317040           1227
>  &(&journal->j_li...         3724        105           4784           502326          64196           1311
>  sparse_irq_lock             3248          0              0                0              0              0
>  &(&lock->wait_lo...         2636        144          10178          1465730         167087           1345
>  &(&lock->wait_lo...         2184         81          11999           971986         272372           1369
>  &(&zone->lock)->...         2010         51           2080           106127           4945            876
>  &(&dentry->d_loc...         1945          0              0                0              0              0
>  &(&dentry->d_loc...         1934          0              0                0              0              0
>  &(&dentry->d_loc...         1933          0              0                0              0              0
>  &(&dentry->d_loc...         1918          0              0                0              0              0
>  &(&dentry->d_loc...         1915          0              0                0              0              0
>  &(&dentry->d_loc...         1910          0              0                0              0              0
>  &(&dentry->d_loc...         1906          0              0                0              0              0
> 
> 
> It looks like we are still contending on @s_inode_list_lock, which is related to quota.
> 
> 
> Thanks,
> Shilong
> 
> 
>                                                                     Honza
>    
>     --
>     Jan Kara <jack@xxxxxxxx>
>     SUSE Labs, CR
> 
> 
-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR


