RE: Need support about value IOPS Which FIO measured


> -----Original Message-----
> From: Nguyen Viet Dung [mailto:dungnguyenviet@xxxxxxxxx]
> Sent: Friday, April 10, 2015 2:23 AM
> To: Elliott, Robert (Server Storage)
> Cc: fio@xxxxxxxxxxxxxxx
> Subject: Re: Need support about value IOPS Which FIO measured
> 
> Dear Elliott, Robert
> Thank you so much for your suggestion. The result I got from fio when
> following your suggestion is good.
> However, there are some parameters in the fio output that I don't
> understand, e.g.:
> Seek time in the fio results
> Latency time
> with which I can measure the performance of my hard disk.
> I am sending a result measured on a "500GB RE4 WD SATA/64MB
> WD5003ABYX-01WERA1" (7,200 RPM, supports 3.5 Gb/s).
> IOPS: write=24, read=35 (I think this is very low). What do you think
> about it?
> I hope for your reply. Thanks

A few suggestions:

> Run status group 0 (all jobs):
>    READ: io=500KB, aggrb=254KB/s, minb=120KB/s, maxb=140KB/s,
> mint=1885msec, maxt=1962msec
>   WRITE: io=1128KB, aggrb=593KB/s, minb=97KB/s, maxb=496KB/s,
> mint=1885msec, maxt=1901msec
> 
> Disk stats (read/write):
>   sda: ios=108/280, merge=0/0, ticks=55876/58507, in_queue=144753,
> util=94.47%

You should run the test much longer; this run only transferred 
500+1128 KiB (about 1.6 MiB).
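For example, a time-based job like the following keeps issuing I/O for a
fixed wall-clock duration instead of stopping after a tiny amount of data.
This is only a sketch; the device name, block size, and 60-second runtime
are illustrative, not taken from your original job file:

```
[longer-run]
filename=/dev/sda     ; illustrative target; use your actual device or file
rw=randrw
bs=4k
iodepth=32
direct=1
ioengine=libaio
time_based=1          ; keep running for the full runtime
runtime=60            ; 60 s here; several minutes gives steadier numbers
```

With time_based set, fio loops over the file/device until runtime expires,
so the IOPS and latency figures reflect sustained behavior rather than a
sub-2-second burst.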

>     lat (usec) : 500=4.46%, 750=1.79%, 1000=2.68%
>     lat (msec) : 2=4.46%, 10=1.79%, 20=0.89%, 50=8.04%, 100=9.82%
>     lat (msec) : 250=9.82%, 500=9.82%, 750=16.96%, 1000=0.89%, 2000=28.57%

What those mean is:
* 4.46+1.79+2.68 = ~9% of the time, the latency was 1 ms or less.
  These are probably read cache hits.
* ...
* 8.04+9.82+9.82+9.82+16.96 = ~54% of the time, the latencies ranged
  from 50 ms to 750 ms. These are normal accesses.
* 28.57% of the time, the latencies were 1 to 2 s.
  These are either from the drive doing some background activity, from
  the queue depth being too high so the I/Os are just stacking up
  somewhere, or even from the drive having spun down and taking a
  while to resume.
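Each fio latency line gives the percentage of I/Os that completed within
that bucket's band, so summing bands by hand is error-prone. A small
Python sketch (bucket values copied from the output quoted above; the
helper name is mine) that totals them:

```python
# Percentage of I/Os per latency bucket, copied from the fio output above.
# Each key is the bucket's upper bound, converted to microseconds.
buckets_us = {
    500: 4.46, 750: 1.79, 1000: 2.68,        # lat (usec) lines
    2_000: 4.46, 10_000: 1.79, 20_000: 0.89, # lat (msec) lines
    50_000: 8.04, 100_000: 9.82, 250_000: 9.82,
    500_000: 9.82, 750_000: 16.96, 1_000_000: 0.89,
    2_000_000: 28.57,
}

def pct_at_or_below(limit_us):
    """Total percentage of I/Os in buckets with upper bound <= limit_us."""
    return sum(p for ub, p in buckets_us.items() if ub <= limit_us)

print(f"<= 1 ms:   {pct_at_or_below(1_000):.2f}%")    # cache-hit territory
print(f"<= 750 ms: {pct_at_or_below(750_000):.2f}%")  # everything but the tail
```

The grand total comes out at ~100% (fio's per-bucket rounding loses a
few hundredths), which is a quick sanity check that no band was missed.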

All of those suggest iodepth=32 is too high for this drive; you should
be seeing single- or double-digit millisecond latencies dominate.
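One way to check is to rerun the same workload at queue depth 1, where
completion latency reflects the drive itself rather than queuing. A
sketch of such an invocation (device name and runtime are illustrative):

```
fio --name=qd1 --filename=/dev/sda --rw=randread --bs=4k --direct=1 \
    --ioengine=libaio --iodepth=1 --time_based --runtime=60
```

If the latency histogram collapses into the single/double-digit
millisecond buckets at iodepth=1, the long tails above were queuing
delay, not the drive.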

Results may change with a longer run time.

