Measuring IOPS

Hi!

I am currently writing an article about fio for a German print magazine, 
after having packaged it for Debian and used it in performance analysis 
and tuning training courses.

After introducing the concepts of fio with some basic job files, I'd like 
to show how to do meaningful IOPS measurements that also work with SSDs 
that compress.

For some first tests I came up with:

martin@merkaba:~[…]> cat iops.job 
[global]
size=2G
bsrange=2-16k
filename=iops1
numjobs=1
iodepth=1
# Random data for SSDs that compress
refill_buffers=1

[zufälligschreiben]
rw=randwrite
stonewall
[sequentiellschreiben]
rw=write
stonewall

[zufälliglesen]
rw=randread
stonewall
[sequentielllesen]
rw=read

(small German dictionary:
- zufällig => random
- lesen => read
- schreiben => write ;)

This takes the following into account:
- It is recommended to use just one process. Why, actually? Why not just 
fill the device with as many requests as possible and see what it can 
handle? (See the first sketch after this list.)

- I instruct fio to write random data, refilling the buffer with 
different random data for each write. That is for compressing SSDs, i.e. 
those with newer SandForce controllers.

- I let it do sync I/O, because I want to measure the device, not the 
cache speed. I considered direct I/O, but at least with the sync I/O 
engine it does not work on Linux 3.0 with Ext4 on LVM: invalid request. 
This may or may not be expected. I am wondering whether direct I/O is 
meant for complete devices only, not for filesystems. (See the second 
sketch after this list.)
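
To illustrate both points, here are two sketches. I have not verified 
them on this machine, so please take them as assumptions rather than 
tested job files.

First, keeping the device busy with many outstanding requests instead of 
a single synchronous one. This assumes the libaio engine is available; 
the queue depth and the job name are arbitrary examples. Since buffered 
libaio can degrade to synchronous behaviour on Linux, it uses direct=1:

[global]
size=2G
bs=4k
filename=iops1
refill_buffers=1
# async engine plus a queue depth, so many requests are in flight at once
ioengine=libaio
iodepth=32
direct=1

[randwrite-qd32]
rw=randwrite

Second, my guess on the direct I/O failure: O_DIRECT on Linux requires 
request sizes and buffers to be aligned to the logical block size of the 
device (usually 512 bytes), and bsrange=2-16k generates request sizes 
down to 2 bytes, which the kernel then rejects. If that is right, direct 
I/O should also work on a filesystem once the sizes are sector-aligned:

[global]
size=2G
# only sector-aligned request sizes, which O_DIRECT should accept
bsrange=4k-16k
filename=iops1
refill_buffers=1
ioengine=sync
# bypass the page cache
direct=1

[direct-randwrite]
rw=randwrite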


Things I didn't consider:
- I do not use the complete device, for obvious reasons here: I tested on 
an SSD that I use for production work as well ;).
  - Thus, for a hard disk this might not be realistic enough, because a 
hard disk has different speeds at different cylinders. I think for 2-16 KB 
requests it shouldn't matter, though.
  - I am considering a read test on the complete device (see the sketch 
after this list).

- The test does not go directly to the device, so there might be some 
Ext4/LVM overhead. On the ThinkPad T520 with its Intel i5 Sandy Bridge 
dual-core CPU I think this is negligible.

- 2 GB might not be enough for reliable measurements (the sketch below 
measures for a fixed time instead).
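
For the read test on the complete device, this is the kind of sketch I 
have in mind. The device name is only an example, and I would start fio 
with its --readonly safety switch to make sure nothing writes. time_based 
plus runtime also sidestep the question of whether 2 GB is enough, by 
measuring for a fixed duration instead of a fixed amount of data:

[wholedevice-randread]
# example device, adjust to the system under test
filename=/dev/sda
rw=randread
bsrange=4k-16k
ioengine=sync
# bypass the page cache so the device, not RAM, is measured
direct=1
# run for a fixed time instead of a fixed amount of data
time_based
runtime=60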


Do you think the above job file could give realistic results? Any 
suggestions?


I got these results:

martin@merkaba:~[…]> ./fio iops.job 
zufälligschreiben: (g=0): rw=randwrite, bs=2-16K/2-16K, ioengine=sync, 
iodepth=1
sequentiellschreiben: (g=1): rw=write, bs=2-16K/2-16K, ioengine=sync, 
iodepth=1
zufälliglesen: (g=2): rw=randread, bs=2-16K/2-16K, ioengine=sync, 
iodepth=1
sequentielllesen: (g=2): rw=read, bs=2-16K/2-16K, ioengine=sync, iodepth=1
fio 1.57
Starting 4 processes
Jobs: 1 (f=1): [__r_] [100.0% done] [561.9M/0K /s] [339K/0  iops] [eta 
00m:00s]                  
zufälligschreiben: (groupid=0, jobs=1): err= 0: pid=23221
  write: io=2048.0MB, bw=16971KB/s, iops=5190 , runt=123573msec
    clat (usec): min=0 , max=275675 , avg=183.76, stdev=989.34
     lat (usec): min=0 , max=275675 , avg=184.02, stdev=989.36
    bw (KB/s) : min=  353, max=94417, per=99.87%, avg=16947.64, 
stdev=11562.05
  cpu          : usr=5.39%, sys=14.47%, ctx=344861, majf=0, minf=30
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0%
     issued r/w/d: total=0/641383/0, short=0/0/0
     lat (usec): 2=4.48%, 4=23.03%, 10=27.46%, 20=5.22%, 50=2.17%
     lat (usec): 100=0.08%, 250=10.16%, 500=21.35%, 750=4.79%, 1000=0.06%
     lat (msec): 2=0.13%, 4=0.64%, 10=0.40%, 20=0.01%, 50=0.01%
     lat (msec): 100=0.01%, 250=0.01%, 500=0.01%
sequentiellschreiben: (groupid=1, jobs=1): err= 0: pid=23227
  write: io=2048.0MB, bw=49431KB/s, iops=6172 , runt= 42426msec
    clat (usec): min=0 , max=83105 , avg=134.18, stdev=1286.14
     lat (usec): min=0 , max=83105 , avg=134.53, stdev=1286.14
    bw (KB/s) : min=    0, max=73767, per=109.57%, avg=54162.16, 
stdev=22989.92
  cpu          : usr=10.29%, sys=22.17%, ctx=232818, majf=0, minf=33
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0%
     issued r/w/d: total=0/261869/0, short=0/0/0
     lat (usec): 2=0.10%, 4=1.16%, 10=9.97%, 20=1.14%, 50=0.09%
     lat (usec): 100=27.31%, 250=59.37%, 500=0.61%, 750=0.04%, 1000=0.02%
     lat (msec): 2=0.04%, 4=0.06%, 10=0.01%, 20=0.06%, 50=0.01%
     lat (msec): 100=0.03%
zufälliglesen: (groupid=2, jobs=1): err= 0: pid=23564
  read : io=2048.0MB, bw=198312KB/s, iops=60635 , runt= 10575msec
    clat (usec): min=0 , max=103758 , avg=14.46, stdev=1058.66
     lat (usec): min=0 , max=103758 , avg=14.50, stdev=1058.66
    bw (KB/s) : min=   98, max=1996998, per=54.76%, avg=217197.79, 
stdev=563543.94
  cpu          : usr=11.20%, sys=8.25%, ctx=513, majf=0, minf=28
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0%
     issued r/w/d: total=641220/0/0, short=0/0/0
     lat (usec): 2=77.54%, 4=21.11%, 10=1.19%, 20=0.09%, 50=0.01%
     lat (usec): 100=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
     lat (msec): 2=0.01%, 4=0.03%, 10=0.01%, 20=0.01%, 50=0.01%
     lat (msec): 100=0.01%, 250=0.01%
sequentielllesen: (groupid=2, jobs=1): err= 0: pid=23565
  read : io=2048.0MB, bw=235953KB/s, iops=29458 , runt=  8888msec
    clat (usec): min=0 , max=71904 , avg=30.61, stdev=278.25
     lat (usec): min=0 , max=71904 , avg=30.71, stdev=278.25
    bw (KB/s) : min=    2, max=266240, per=59.04%, avg=234162.53, 
stdev=63283.64
  cpu          : usr=3.42%, sys=16.70%, ctx=8326, majf=0, minf=28
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0%
     issued r/w/d: total=261826/0/0, short=0/0/0
     lat (usec): 2=28.95%, 4=46.75%, 10=19.20%, 20=1.80%, 50=0.17%
     lat (usec): 100=0.11%, 250=0.05%, 500=0.05%, 750=0.24%, 1000=2.44%
     lat (msec): 2=0.15%, 4=0.08%, 10=0.01%, 20=0.01%, 100=0.01%

Run status group 0 (all jobs):
  WRITE: io=2048.0MB, aggrb=16970KB/s, minb=17378KB/s, maxb=17378KB/s, 
mint=123573msec, maxt=123573msec

Run status group 1 (all jobs):
  WRITE: io=2048.0MB, aggrb=49430KB/s, minb=50617KB/s, maxb=50617KB/s, 
mint=42426msec, maxt=42426msec

Run status group 2 (all jobs):
   READ: io=4096.0MB, aggrb=396624KB/s, minb=203071KB/s, maxb=241616KB/s, 
mint=8888msec, maxt=10575msec

Disk stats (read/write):
  dm-2: ios=577687/390944, merge=0/0, ticks=141180/6046100, 
in_queue=6187964, util=76.63%, aggrios=577469/390258, aggrmerge=216/761, 
aggrticks=140576/6004336, aggrin_queue=6144016, aggrutil=76.38%
    sda: ios=577469/390258, merge=216/761, ticks=140576/6004336, 
in_queue=6144016, util=76.38%

Which looks quite fine, I believe ;). I have not run this test on a 
hard disk yet.

Thanks,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7