Re: Re: Re: Re: md raid5 random performance 6x SSD RAID5

On 11/30/2013 8:12 AM, lilofile wrote:
> Thanks. Now I use fio to test random write performance.

You were using dd for testing your array throughput.  dd issues
single-threaded sequential IO, which does not fully tax your hardware
and thus does not produce realistic results.  I recommended you use FIO
with many threads, which will tax your hardware.  The purpose of this
was threefold:

1.  Show the difference between single and multiple thread throughput
2.  Show the peak hardware streaming throughput you might achieve
3.  Show the effects of stripe_cache_size as IO rate increases

Please post the FIO multi-threaded streaming results with
stripe_cache_size values of 2048, 4096, and 8192 so everyone can see
the differences, and so those results end up in the list archive.  This
information will be useful to others in the future.  Please post these
results before we move on to discussing random IO performance.
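
If it helps, one way to script the sweep is below -- just a rough
sketch, assuming the array is /dev/md0 as in your fio output and
reusing the streaming options from my earlier mail; adjust the device
name and sizes to fit your setup:

~# for sz in 2048 4096 8192; do
     # set the md stripe cache, then re-run the streaming write test
     echo $sz > /sys/block/md0/md/stripe_cache_size
     fio --filename=/dev/md0 --zero_buffers --numjobs=16 \
         --group_reporting --blocksize=64k --ioengine=libaio \
         --iodepth=16 --direct=1 --size=64g \
         --name=write-$sz --rw=write
   done

The echo takes effect immediately, so each fio pass runs against the
new stripe cache size; post all three result blocks.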

Remember, getting help on a mailing list isn't strictly for your own
benefit, but for the benefit of everyone.  So when you are asked to run
a test, always post the results; they help everyone, not just you.

Thanks.

> Why is the random write performance so low?  With 6x SSD, random 4k write is only 55097 IOPS, yet when I use FIO a single SSD reaches 3.5W (about 35,000) IOPS for random 4k writes.
> 
> root@host0:/# fio -filename=/dev/md0     -iodepth=16 -thread -rw=randwrite -ioengine=libaio -bs=4k -size=30G  -numjobs=10 -runtime=1000 -group_reporting -name=mytest 
> mytest: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=16
> ...
> mytest: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=16
> fio 1.59
> Starting 10 threads
> Jobs: 1 (f=1): [____w_____] [68.3% done] [0K/0K /s] [0 /0 iops] [eta 07m:53s]
> mytest: (groupid=0, jobs=10): err= 0: pid=6099
>   write: io=215230MB, bw=220392KB/s, iops=55097 , runt=1000019msec
>     slat (usec): min=1 , max=337733 , avg=176.46, stdev=2623.23
>     clat (usec): min=4 , max=540048 , avg=2667.83, stdev=10078.16
>      lat (usec): min=40 , max=576049 , avg=2844.42, stdev=10399.30
>     bw (KB/s) : min=    0, max=1100192, per=10.22%, avg=22514.48, stdev=17262.85
>   cpu          : usr=6.70%, sys=16.48%, ctx=11656865, majf=46, minf=1626216
>   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
>      issued r/w/d: total=0/55098999/0, short=0/0/0
>      lat (usec): 10=0.01%, 50=41.01%, 100=50.01%, 250=1.23%, 500=0.42%
>      lat (usec): 750=0.02%, 1000=0.01%
>      lat (msec): 2=0.01%, 4=0.01%, 10=0.05%, 20=0.16%, 50=6.58%
>      lat (msec): 100=0.44%, 250=0.05%, 500=0.01%, 750=0.01%
> 
> Run status group 0 (all jobs):
>   WRITE: io=215230MB, aggrb=220391KB/s, minb=225681KB/s, maxb=225681KB/s, mint=1000019msec, maxt=1000019msec
> 
> Disk stats (read/write):
>   md0: ios=167/49755890, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=12530125/13199536, aggrmerge=1151802/1283069, aggrticks=14762174/11503916, aggrin_queue=26230996, aggrutil=95.56%
>     sdh: ios=12519812/13192529, merge=1157990/1291154, ticks=11854444/8141456, in_queue=19960416, util=90.19%
>     sdi: ios=12524619/13201735, merge=1158477/1280984, ticks=12161064/8308572, in_queue=20436280, util=90.56%
>     sdj: ios=12526628/13210796, merge=1155512/1274875, ticks=12074040/8250524, in_queue=20289960, util=90.63%
>     sdk: ios=12534367/13213646, merge=1148527/1268088, ticks=12372792/8455368, in_queue=20791752, util=90.81%
>     sdl: ios=12534777/13205894, merge=1147263/1275381, ticks=12632824/8728444, in_queue=21325724, util=90.86%
>     sdm: ios=12540551/13172620, merge=1143048/1307937, ticks=27477880/27139136, in_queue=54581844, util=95.56%
> 
> 
> 
> 
> 
> ------------------------------------------------------------------
> From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
> Sent: Friday, November 29, 2013 10:38
> To: lilofile <lilofile@xxxxxxxxxx>; Linux RAID <linux-raid@xxxxxxxxxxxxxxx>
> Subject: Re: Re: Re: md raid5 performance 6x SSD RAID5
> 
> On 11/28/2013 4:02 AM, lilofile wrote:
>> Thank you for your advice.  Now I have tested the multi-thread patch; single raid5 performance improved by 30%.
>>
>> But I have another problem: when writing to a single raid, write performance is approx 1.1GB/s
> ...
>> [1]-  Done                    dd if=/dev/zero of=/dev/md126 count=100000 bs=1M
>> [2]+  Done                    dd if=/dev/zero of=/dev/md127 count=100000 bs=1M
> 
> No.  This is not a parallel IO test.
> 
> ...
>> To address #3 use FIO or a similar testing tool that can issue IOs in
>> parallel.  With SSD based storage you will never reach maximum
>> throughput with a serial data stream.
> 
> This is a parallel IO test, one command line:
> 
> ~# fio --filename=/dev/md126 --zero_buffers --numjobs=16
> --group_reporting --blocksize=64k --ioengine=libaio --iodepth=16
> --direct=1 --size=64g --name=read --rw=read --stonewall --name=write
> --rw=write --stonewall
> 
> Normally this targets a filesystem, not a raw block device.  This
> command line should work for a raw md device.
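
For completeness, if you ever run this against a filesystem instead of
the raw md device, point --directory at the mount point and let fio
create its own test files.  A rough sketch, assuming a hypothetical
mount point /mnt/md126:

~# fio --directory=/mnt/md126 --zero_buffers --numjobs=16 \
   --group_reporting --blocksize=64k --ioengine=libaio --iodepth=16 \
   --direct=1 --size=64g --name=read --rw=read --stonewall \
   --name=write --rw=write --stonewall

With --directory fio creates one file per job under that path, so the
read phase and then the write phase (separated by --stonewall) exercise
the filesystem and the array together rather than the raw device alone.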
> 