Re: simplify reconnecting dentries looked up by filehandle

On Tue, Oct 15, 2013 at 04:39:28PM -0400, J. Bruce Fields wrote:
> I tested performance with a script that creates an N-deep directory
> tree, gets a filehandle for the bottom directory, writes 2 to
> /proc/sys/vm/drop_caches, then times an open_by_handle_at() of the
> filehandle.  Code at
> 
> 	git://linux-nfs.org/~bfields/fhtests.git
> 
> For directories of various depths, some example observed times (median
> results of 3 similar runs, in seconds), were:
> 
> 		depth:	8000	2000	200
> 	no patches:	11	0.7	0.02
> 	first patch:	 6	0.4	0.01
> 	all patches:	 0.1	0.03	0.01
> 
> For depths < 2000 I used an ugly hack to shrink_slab_node() to force
> drop_caches to free more dentries.  Differences looked lost in the
> noise for much smaller depths.
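
For anyone who doesn't want to chase the repo, the guts of that harness
boil down to something like the sketch below (my reading of it, not the
actual script: the argv layout is my invention, error handling is
trimmed, and the drop_caches write needs root):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        /* argv[1]: bottom of the deep tree, argv[2]: mount point */
        struct file_handle *fh = malloc(sizeof(*fh) + MAX_HANDLE_SZ);
        struct timespec t0, t1;
        int mount_id, mount_fd, fd;

        fh->handle_bytes = MAX_HANDLE_SZ;

        /* encode a handle while the whole path is still cached */
        if (name_to_handle_at(AT_FDCWD, argv[1], fh, &mount_id, 0) < 0) {
                perror("name_to_handle_at");
                return 1;
        }

        /* any fd on the mount works; don't pin the deep path itself */
        mount_fd = open(argv[2], O_RDONLY | O_DIRECTORY);

        /* "2" frees dentries and inodes, so the decode starts cold */
        fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
        write(fd, "2", 1);
        close(fd);

        /* time the cold decode, i.e. the dentry reconnection */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        fd = open_by_handle_at(mount_fd, fh, O_RDONLY);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        if (fd < 0)
                perror("open_by_handle_at");

        printf("%.3fs\n", (t1.tv_sec - t0.tv_sec) +
                          (t1.tv_nsec - t0.tv_nsec) / 1e9);
        return 0;
}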

Btw, it would be good to get this wired up in xfstests - add xfs_io
commands for the by-handle ops and then just wire up the script driving
them.

I'd also really like to see a stress test racing cold handle conversion
against various VFS ops, based on that sort of infrastructure.
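
Even something as dumb as the sketch below, run long enough, would at
least exercise reconnection against concurrent renames (the /mnt/test
paths and iteration counts are made up, error handling is skipped, and
again drop_caches needs root):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
        struct file_handle *fh = malloc(sizeof(*fh) + MAX_HANDLE_SZ);
        int mount_id, mount_fd, fd, i;

        mkdir("/mnt/test/a", 0755);
        mkdir("/mnt/test/a/b", 0755);

        /* encode a handle for the bottom directory while cached */
        fh->handle_bytes = MAX_HANDLE_SZ;
        name_to_handle_at(AT_FDCWD, "/mnt/test/a/b", fh, &mount_id, 0);
        mount_fd = open("/mnt/test", O_RDONLY | O_DIRECTORY);

        if (fork() == 0) {
                /* child: bounce a path component back and forth */
                for (i = 0; i < 100000; i++) {
                        rename("/mnt/test/a", "/mnt/test/a.tmp");
                        rename("/mnt/test/a.tmp", "/mnt/test/a");
                }
                _exit(0);
        }

        /* parent: decode the handle cold, over and over, vs the renames */
        for (i = 0; i < 1000; i++) {
                fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
                write(fd, "2", 1);
                close(fd);

                fd = open_by_handle_at(mount_fd, fh, O_RDONLY);
                if (fd >= 0)
                        close(fd);
        }
        wait(NULL);
        return 0;
}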
