Re: Filesystem benchmarks on reasonably fast hardware

On Mon, Jul 18, 2011 at 01:40:36PM +0200, Jörn Engel wrote:
> On Mon, 18 July 2011 20:57:49 +1000, Dave Chinner wrote:
> > On Mon, Jul 18, 2011 at 09:53:39AM +0200, Jörn Engel wrote:
> > > On Mon, 18 July 2011 09:32:52 +1000, Dave Chinner wrote:
> > > > On Sun, Jul 17, 2011 at 06:05:01PM +0200, Jörn Engel wrote:
> > 
> > > > > xfs:
> > > > > ====
> > > > > seqrd	1	2	4	8	16	32	64	128
> > > > > 16384	4698	4424	4397	4402	4394	4398	4642	4679	
> > > > > 8192	6234	5827	5797	5801	5795	6114	5793	5812	
> > > > > 4096	9100	8835	8882	8896	8874	8890	8910	8906	
> > > > > 2048	14922	14391	14259	14248	14264	14264	14269	14273	
> > > > > 1024	23853	22690	22329	22362	22338	22277	22240	22301	
> > > > > 512	37353	33990	33292	33332	33306	33296	33224	33271	
> 
> Your patch definitely helps.  Bottom right number is 584741 now.
> Still slower than ext4 or btrfs, but in the right ballpark.  Will
> post the entire block once it has been generated.

The btrfs numbers come from doing different IO. Have a look at all
the sub-filesystem block size numbers for btrfs: no matter the
thread count, the number is the same - hardware limits. btrfs is not
doing an IO per read syscall there - I'd say it's falling back to
buffered IO, unlike ext4 and xfs....
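
(To illustrate the distinction, here's a minimal O_DIRECT read loop -
not the benchmark used in this thread, just a sketch: with O_DIRECT
every read() has to go to the device, whereas a buffered read of the
same region is mostly satisfied from page cache readahead, which is
what the flat btrfs numbers look like.)

	/* sketch only - one device IO per read() via O_DIRECT */
	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	int main(int argc, char **argv)
	{
		size_t bs = 16384;	/* block size, e.g. 16k */
		void *buf;
		ssize_t n;
		int fd;

		if (argc < 2) {
			fprintf(stderr, "usage: %s <file>\n", argv[0]);
			return 1;
		}
		fd = open(argv[1], O_RDONLY | O_DIRECT);
		if (fd < 0) {
			perror("open");
			return 1;
		}
		if (posix_memalign(&buf, 4096, bs)) {
			perror("posix_memalign");
			return 1;
		}
		/* each read() here is a separate direct IO */
		while ((n = read(fd, buf, bs)) > 0)
			;
		close(fd);
		return 0;
	}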

.....

> seqrd	1	2	4	8	16	32	64	128
> 16384	4542	8311	15738	28955	38273	36644	38530	38527	
> 8192	6000	10413	19208	33878	65927	76906	77083	77102	
> 4096	8931	14971	24794	44223	83512	144867	147581	150702	
> 2048	14375	23489	34364	56887	103053	192662	307167	309222	
> 1024	21647	36022	49649	77163	132886	243296	421389	497581	
> 512	31832	61257	79545	108782	176341	303836	517814	584741	
> 
> Quite a nice improvement for such a small patch.  As they say, "every
> small factor of 17 helps". ;)

And in general the numbers are within a couple of percent of the
ext4 numbers, which is probably a reflection of the slightly higher
CPU cost of the XFS read path compared to ext4.

> What bothers me a bit is that the single-threaded numbers took such a
> noticeable hit...

Is it reproducible? I did notice quite a bit of run-to-run variation
in the numbers I ran. For single threaded numbers, they appear to be
in the order of +/-100 ops @ 16k block size.

> 
> > Ok, the patch below takes the numbers on my test setup on a 16k IO
> > size:
> > 
> > seqrd	1	2	4	8	16
> > vanilla	3603	2798	 2563	not tested...
> > patches 3707	5746	10304	12875	11016
> 
> ...in particular when your numbers improve even for a single thread.
> Wonder what's going on here.

And these were just quoted from a single test run.

> Anyway, feel free to add a Tested-By: or something from me.  And maybe
> fix the two typos below.

Will do.

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx