Could you remove rate_process=poisson and try with the same profile? Without
rate_process=poisson I am getting a bit less than the expected 100 IOPS:

# fio -name=rate --filename=/dev/mapper/3600601602d003e0002cde5580011c8fb --ioengine=libaio --direct=1 --fill_device=1 --group_reporting --rw=randwrite --rate_iops=100 --iodepth=8 --norandommap --bsrange=8k-16k --bssplit=8k/90:16k/10

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.25    0.00    0.38    1.13    0.00   98.24
Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
dm-18             0.00     0.00    0.00   91.00     0.00   800.00    17.58     0.38    4.20    0.00    4.20   3.96  36.00
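To be concrete, below is a sketch of what I mean for the read case: it is your
bssplit_readtest.fio as posted, with only the rate_process=poisson line dropped
so fio falls back to its default (linear) rate process. I have not run this
exact file against your device, so treat it as a starting point; the same
one-line removal applies to bssplit_writetest.fio.

[global]
ioengine=libaio
direct=1
time_based
norandommap
group_reporting
disk_util=0
continue_on_error=all
# rate_process=poisson removed; fio's default linear rate process is used

[db-oltp-w]
bssplit=8k/90:16k/10,,
size=128G
filename=/dev/sdg
rw=randread
iodepth=8
rate_iops=100

If this keeps the read job close to 100 IOPS (my randwrite run above stays just
under 100), that would point at the poisson rate process, rather than bssplit
itself, as what doubles the effective rate.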
--
Srinivasa R Chamarthy

On Thu, Apr 6, 2017 at 12:09 AM, GV Govindasamy <gv.govindasamy@xxxxxxxxxxx> wrote:
>
> Hello All
>
> I would like to know whether what I observe is a bug or by-design behaviour
> that I don't understand. I expect to generate 100 random read ops or 100
> random write ops in the following examples, with 90% 8k and 10% 16k. FIO
> also appears to target 100 IOPS, but ends up doing 2x that in the actual
> workload.
>
> Thanks,
> +GV
>
> =============================================
> $ ./fio-2.19 --version (compiled with: ./configure --build-static; make on CentOS release 6.7/3.10.0-229)
> fio-2.19
>
> =============================================
> $ cat output/fio/bssplit_readtest.fio
> [global]
> ioengine=libaio
> direct=1
> time_based
> norandommap
> group_reporting
> disk_util=0
> continue_on_error=all
> rate_process=poisson
>
>
> [db-oltp-w]
> bssplit=8k/90:16k/10,,
> size=128G
> filename=/dev/sdg
> rw=randread
> iodepth=8
> rate_iops=100
>
> =========================================
> $ iostat -x 2 sdg
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.13    0.00    0.13    1.63    0.00   98.12
> Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sdg               0.00     0.00  191.00    0.00  3392.00     0.00    17.76     0.09    0.45    0.45    0.00   0.41   7.75
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.25    0.00    0.25    1.75    0.00   97.74
> Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sdg               0.00     0.00  193.00    0.00  3424.00     0.00    17.74     0.08    0.42    0.42    0.00   0.39   7.50
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.13    0.00    0.25    1.88    0.00   97.74
> Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sdg               0.00     0.00  213.50    0.00  3672.00     0.00    17.20     0.10    0.45    0.45    0.00   0.40   8.45
> =========================================
>
> $ sudo ./fio-2.19 --runtime 120 --eta-newline=30 output/fio/bssplit_readtest.fio
> db-oltp-w: (g=0): rw=randread, bs=(R) 8192B-16.0KiB, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
> fio-2.19
> Starting 1 process
> Jobs: 1 (f=1), 0-100 IOPS: [r(1)][26.4%][r=1632KiB/s,w=0KiB/s][r=194,w=0 IOPS][eta 01m:29s]
> Jobs: 1 (f=1), 0-100 IOPS: [r(1)][51.2%][r=1648KiB/s,w=0KiB/s][r=190,w=0 IOPS][eta 00m:59s]
> Jobs: 1 (f=1), 0-100 IOPS: [r(1)][76.0%][r=1760KiB/s,w=0KiB/s][r=193,w=0 IOPS][eta 00m:29s]
> Jobs: 1 (f=1), 0-100 IOPS: [r(1)][100.0%][r=1816KiB/s,w=0KiB/s][r=202,w=0 IOPS][eta 00m:00s]
> db-oltp-w: (groupid=0, jobs=1): err= 0: pid=1605: Wed Apr 5 15:53:00 2017
>   read: IOPS=200, BW=1765KiB/s (1808kB/s)(207MiB/120013msec)
>     slat (usec): min=3, max=117, avg=30.53, stdev=11.20
>     clat (usec): min=120, max=8658, avg=413.81, stdev=216.67
>      lat (usec): min=139, max=8697, avg=444.34, stdev=218.71
>     clat percentiles (usec):
>      |  1.00th=[  175],  5.00th=[  221], 10.00th=[  251], 20.00th=[  290],
>      | 30.00th=[  338], 40.00th=[  382], 50.00th=[  406], 60.00th=[  434],
>      | 70.00th=[  462], 80.00th=[  498], 90.00th=[  556], 95.00th=[  620],
>      | 99.00th=[  756], 99.50th=[  932], 99.90th=[ 3632], 99.95th=[ 4704],
>      | 99.99th=[ 6368]
>     lat (usec) : 250=9.84%, 500=70.75%, 750=18.34%, 1000=0.60%
>     lat (msec) : 2=0.26%, 4=0.12%, 10=0.08%
>   cpu          : usr=0.37%, sys=0.84%, ctx=44140, majf=0, minf=60
>   IO depths    : 1=91.3%, 2=8.6%, 4=0.1%, 8=0.1%, 16=0.0%, 32=0.0%, >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      issued rwt: total=24072,0,0, short=0,0,0, dropped=0,0,0
>      errors    : total=0, first_error=0/<Success>
>      latency   : target=0, window=0, percentile=100.00%, depth=8
>
> Run status group 0 (all jobs):
>    READ: bw=1765KiB/s (1808kB/s), 1765KiB/s-1765KiB/s (1808kB/s-1808kB/s), io=207MiB (217MB), run=120013-120013msec
> ========================================
>
> $ cat output/fio/bssplit_writetest.fio
> [global]
> ioengine=libaio
> direct=1
> time_based
> norandommap
> group_reporting
> disk_util=0
> continue_on_error=all
> rate_process=poisson
>
> [db-oltp-w]
> bssplit=,8k/90:16k/10,
> size=128G
> filename=/dev/sdg
> rw=randwrite
> iodepth=8
> rate_iops=100
> =========================================
> $ iostat -x 2 sdg
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.13    0.00    0.25    4.89    0.00   94.74
> Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sdg               0.00     0.00    0.00  195.00     0.00  3480.00    17.85     0.28    1.43    0.00    1.43   1.06  20.65
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.13    0.00    0.38    4.38    0.00   95.12
> Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sdg               0.00     0.00    0.00  202.50     0.00  3632.00    17.94     0.24    1.18    0.00    1.18   0.92  18.60
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.00    0.00    0.25    3.76    0.00   95.98
> Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sdg               0.00     0.00    0.00  179.00     0.00  3072.00    17.16     0.21    1.15    0.00    1.15   0.92  16.45
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.13    0.00    0.25    4.52    0.00   95.11
> Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sdg               0.00     0.00    0.00  199.50     0.00  3472.00    17.40     0.25    1.24    0.00    1.24   0.98  19.55
> ==========================================
>
> $ sudo ./fio-2.19 --runtime 120 --eta-newline=30 output/fio/bssplit_writetest.fio
> db-oltp-w: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 8192B-16.0KiB, (T) 4096B-4096B, ioengine=libaio, iodepth=8
> fio-2.19
> Starting 1 process
> Jobs: 1 (f=1), 0-100 IOPS: [w(1)][26.4%][r=0KiB/s,w=1632KiB/s][r=0,w=194 IOPS][eta 01m:29s]
> Jobs: 1 (f=1), 0-100 IOPS: [w(1)][51.2%][r=0KiB/s,w=1648KiB/s][r=0,w=190 IOPS][eta 00m:59s]
> Jobs: 1 (f=1), 0-100 IOPS: [w(1)][76.0%][r=0KiB/s,w=1760KiB/s][r=0,w=193 IOPS][eta 00m:29s]
> Jobs: 1 (f=1), 0-100 IOPS: [w(1)][100.0%][r=0KiB/s,w=1816KiB/s][r=0,w=202 IOPS][eta 00m:00s]
> db-oltp-w: (groupid=0, jobs=1): err= 0: pid=1616: Wed Apr 5 15:57:04 2017
>   write: IOPS=200, BW=1765KiB/s (1807kB/s)(207MiB/120014msec)
>     slat (usec): min=5, max=125, avg=31.91, stdev=12.20
>     clat (usec): min=500, max=26437, avg=1170.96, stdev=849.81
>      lat (usec): min=516, max=26448, avg=1202.87, stdev=849.55
>     clat percentiles (usec):
>      |  1.00th=[  644],  5.00th=[  732], 10.00th=[  788], 20.00th=[  868],
>      | 30.00th=[  932], 40.00th=[  996], 50.00th=[ 1048], 60.00th=[ 1128],
>      | 70.00th=[ 1192], 80.00th=[ 1288], 90.00th=[ 1448], 95.00th=[ 1608],
>      | 99.00th=[ 4832], 99.50th=[ 5920], 99.90th=[12480], 99.95th=[13760],
>      | 99.99th=[26496]
>     lat (usec) : 750=6.15%, 1000=34.93%
>     lat (msec) : 2=56.35%, 4=1.15%, 10=1.27%, 20=0.11%, 50=0.03%
>   cpu          : usr=0.39%, sys=0.79%, ctx=38562, majf=0, minf=34
>   IO depths    : 1=80.2%, 2=19.3%, 4=0.4%, 8=0.1%, 16=0.0%, 32=0.0%, >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      issued rwt: total=0,24072,0, short=0,0,0, dropped=0,0,0
>      errors    : total=0, first_error=0/<Success>
>      latency   : target=0, window=0, percentile=100.00%, depth=8
>
> Run status group 0 (all jobs):
>   WRITE: bw=1765KiB/s (1807kB/s), 1765KiB/s-1765KiB/s (1807kB/s-1807kB/s), io=207MiB (217MB), run=120014-120014msec
> ==========================================
--
To unsubscribe from this list: send the line "unsubscribe fio" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html