Re: Measuring IOPS

On Friday, 29 July 2011, Martin Steigerwald wrote:
> Hi!
> 
> I am currently writing an article about fio for a German print magazine
> after having packaged it for Debian and used it in performance
> analysis & tuning trainings.
> 
> After introducing the concepts of fio with some basic job files,
> I'd like to show how to do meaningful IOPS measurements that also
> work with SSDs that compress.
> 
> For some first tests I came up with:
> 
> martin@merkaba:~[…]> cat iops.job
> [global]
> size=2G
> bsrange=2-16k
> filename=iops1
> numjobs=1
> iodepth=1
> # Random data for SSDs that compress
> refill_buffers=1
> 
> [zufälligschreiben]
> rw=randwrite
> stonewall
> [sequentiellschreiben]
> rw=write
> stonewall
> 
> [zufälliglesen]
> rw=randread
> stonewall
> [sequentielllesen]
> rw=read
> 
> (small German dictionary:
> - zufällig => random
> - lesen => read
> - schreiben => write ;)
[...]
> Do you think the above job file could give realistic results? Any
> suggestions?
> 
> 
> I got these results:

With a simpler read job I get different results that puzzle me:

martin@merkaba:~/Artikel/LinuxNewMedia/fio/Recherche/fio> cat zweierlei-lesen-2gb-variable-blockgrößen.job
[global]
rw=randread
size=2g
bsrange=2-16k

[zufälliglesen]
stonewall
[sequentielllesen]
rw=read

martin@merkaba:~[...]> ./fio zweierlei-lesen-2gb-variable-blockgrößen.job
zufälliglesen: (g=0): rw=randread, bs=2-16K/2-16K, ioengine=sync, iodepth=1
sequentielllesen: (g=0): rw=read, bs=2-16K/2-16K, ioengine=sync, iodepth=1
fio 1.57
Starting 2 processes
Jobs: 1 (f=1): [r_] [100.0% done] [96146K/0K /s] [88.3K/0  iops] [eta 00m:00s]
zufälliglesen: (groupid=0, jobs=1): err= 0: pid=29273
  read : io=2048.0MB, bw=20915KB/s, iops=6389 , runt=100269msec
    clat (usec): min=0 , max=103772 , avg=150.09, stdev=1042.77
     lat (usec): min=0 , max=103772 , avg=150.34, stdev=1042.79
    bw (KB/s) : min=  131, max=112571, per=50.31%, avg=21045.54, stdev=13225.53
  cpu          : usr=4.66%, sys=11.24%, ctx=262203, majf=0, minf=26
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=640622/0/0, short=0/0/0
     lat (usec): 2=23.94%, 4=26.21%, 10=7.86%, 20=1.39%, 50=0.15%
     lat (usec): 100=0.01%, 250=14.76%, 500=21.53%, 750=3.77%, 1000=0.10%
     lat (msec): 2=0.16%, 4=0.09%, 10=0.01%, 20=0.01%, 50=0.01%
     lat (msec): 100=0.01%, 250=0.01%
sequentielllesen: (groupid=0, jobs=1): err= 0: pid=29274
  read : io=2048.0MB, bw=254108KB/s, iops=31748 , runt=  8253msec
    clat (usec): min=0 , max=4773 , avg=30.44, stdev=173.41
     lat (usec): min=0 , max=4773 , avg=30.54, stdev=173.41
    bw (KB/s) : min=229329, max=265720, per=607.79%, avg=254236.81, stdev=8940.36
  cpu          : usr=4.02%, sys=16.97%, ctx=8407, majf=0, minf=28
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=262021/0/0, short=0/0/0
     lat (usec): 2=30.07%, 4=46.83%, 10=17.84%, 20=1.91%, 50=0.19%
     lat (usec): 100=0.12%, 250=0.02%, 500=0.02%, 750=0.21%, 1000=2.52%
     lat (msec): 2=0.16%, 4=0.10%, 10=0.01%

Run status group 0 (all jobs):
   READ: io=4096.0MB, aggrb=41830KB/s, minb=21417KB/s, maxb=260206KB/s, mint=8253msec, maxt=100269msec

Disk stats (read/write):
  dm-2: ios=267216/204, merge=0/0, ticks=95188/36, in_queue=95240, util=80.52%, aggrios=266989/191, aggrmerge=267/175, aggrticks=94712/44, aggrin_queue=94312, aggrutil=80.18%
    sda: ios=266989/191, merge=267/175, ticks=94712/44, in_queue=94312, util=80.18%


What's going on here? Where does the difference between 6389 IOPS for 
this simpler read job file and 60635 IOPS for the IOPS job file come 
from? These results compared to the results from the IOPS job do not 
make sense to me. Is it just random data versus zeros in the buffers? 
Which values are more realistic? I thought that on an SSD, random I/O 
versus sequential I/O should not cause such a big difference.
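
Just a sketch I have not run yet: rerunning the random read with 
direct=1 should take the page cache out of the picture. As far as I 
know, O_DIRECT requires sector-aligned transfers, so the 2-16k byte 
range has to give way to a fixed, aligned block size:

[zufälliglesen-direct]
# bypass the page cache so only the SSD answers
direct=1
rw=randread
size=2g
bs=4k

If that lands near the 6389 IOPS from above rather than the 60635 IOPS 
of the IOPS job, the higher number was probably served from the page 
cache and not from the SSD.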

Files are laid out as follows:

martin@merkaba:~[…]> sudo filefrag zufälliglesen.1.0 sequentielllesen.2.0 iops1
zufälliglesen.1.0: 17 extents found
sequentielllesen.2.0: 17 extents found
iops1: 258 extents found

Not that it should matter much on an SSD.
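
Back to the cache question: another check I have in mind, but have not 
tried yet, is dropping the page cache right before a rerun:

sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'

That should rule out reads being served from memory that is still warm 
from laying out the files.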

This is on a ThinkPad T520 with an Intel i5 Sandy Bridge dual core, 8 GB 
of RAM and the aforementioned Intel SSD 320, on ext4 on LVM with the 
Debian Linux 3.0.0 kernel package.

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7

