Re: Filesystem benchmarks on reasonably fast hardware

On Sun, Jul 17, 2011 at 06:05:01PM +0200, Jörn Engel wrote:
> Hello everyone!
> 
> Recently I have had the pleasure of working with some nice hardware
> and the displeasure of seeing it fail commercially.  However, when
> trying to optimize performance I noticed that in some cases the
> bottlenecks were not in the hardware or my driver, but rather in the
> filesystem on top of it.  So all this may still be useful in
> improving said filesystem.
> 
> Hardware is basically a fast SSD.  Performance tops out at about
> 650MB/s and is fairly insensitive to random access behaviour.  Latency
> is about 50us for 512B reads and near 0 for writes, through the usual
> cheating.
> 
> Numbers below were created with sysbench, using directIO.  Each block
> is a matrix with results for blocksizes from 512B to 16384B and thread
> count from 1 to 128.  Four blocks for reads and writes, both
> sequential and random.

What's the command line/script used to generate the result matrix?
And what kernel are you running on?
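
For reference, I'd expect each cell of the matrix to come from
something like this (file size, run time and the prepare step are my
guesses - the exact invocation is what I'm after):

	sysbench --test=fileio --file-total-size=8G prepare
	sysbench --test=fileio --file-total-size=8G \
		--file-test-mode=rndrd --file-block-size=512 \
		--num-threads=128 --file-extra-flags=direct \
		--max-time=60 --max-requests=0 run
	sysbench --test=fileio --file-total-size=8G cleanup

looped over the four test modes, the six block sizes and the eight
thread counts.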

> xfs:
> ====
> seqrd	1	2	4	8	16	32	64	128
> 16384	4698	4424	4397	4402	4394	4398	4642	4679	
> 8192	6234	5827	5797	5801	5795	6114	5793	5812	
> 4096	9100	8835	8882	8896	8874	8890	8910	8906	
> 2048	14922	14391	14259	14248	14264	14264	14269	14273	
> 1024	23853	22690	22329	22362	22338	22277	22240	22301	
> 512	37353	33990	33292	33332	33306	33296	33224	33271	

Something is single threading completely there - the seqrd numbers
are essentially flat from 1 to 128 threads (~4400 iops at 16k no
matter how many threads), which is very wrong. Anyone want to send me
a nice fast PCIe SSD? My disks don't spin that fast... :/
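
(In the meantime, something like

	perf record -a -g -- sleep 10	# while the 128-thread seqrd run is going
	perf report

would show where it is serialising.)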

> rndrd	1	2	4	8	16	32	64	128
> 16384	4585	8248	14219	22533	32020	38636	39033	39054	
> 8192	6032	11186	20294	34443	53112	71228	78197	78284	
> 4096	8247	15539	29046	52090	86744	125835	154031	157143	
> 2048	11950	22652	42719	79562	140133	218092	286111	314870	
> 1024	16526	31294	59761	112494	207848	348226	483972	574403	
> 512	20635	39755	73010	130992	270648	484406	686190	726615	
> 
> seqwr	1	2	4	8	16	32	64	128
> 16384	39956	39695	39971	39913	37042	37538	36591	32179	
> 8192	67934	66073	30963	29038	29852	25210	23983	28272	
> 4096	89250	81417	28671	18685	12917	14870	22643	22237	
> 2048	140272	120588	140665	140012	137516	139183	131330	129684	
> 1024	217473	147899	210350	218526	219867	220120	219758	215166	
> 512	328260	181197	211131	263533	294009	298203	301698	298013	
> 
> rndwr	1	2	4	8	16	32	64	128
> 16384	38447	38153	38145	38140	38156	38199	38208	38236	
> 8192	78001	76965	76908	76945	77023	77174	77166	77106	
> 4096	160721	156000	157196	157084	157078	157123	156978	157149	
> 2048	325395	317148	317858	318442	318750	318981	319798	320393	
> 1024	434084	649814	650176	651820	653928	654223	655650	655818	
> 512	501067	876555	1290292	1217671	1244399	1267729	1285469	1298522	

I'm assuming that as the h/w can do 650MB/s, the numbers are in iops?
From 4 threads up, all the rndwr results equate to ~650MB/s (e.g.
16384 bytes * 38200 iops ~= 625MB/s, 512 bytes * 1290292 iops ~=
660MB/s).

> Sequential reads are pretty horrible.  Sequential writes are hitting a
> hot lock again.

lockstat output?
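
If you don't have it handy: with CONFIG_LOCK_STAT=y, something like

	echo 0 > /proc/lock_stat		# clear old statistics
	echo 1 > /proc/sys/kernel/lock_stat	# turn collection on
	<run the seqwr workload>
	echo 0 > /proc/sys/kernel/lock_stat	# turn collection off
	cat /proc/lock_stat

should show which locks are being hammered.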

> So, if anyone would like to improve one of these filesystems and needs
> more data, feel free to ping me.

Of course I'm interested. ;)

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx

