AFAIK fio provides lat, slat and clat, but not the device service time. Is there a way to make fio report this, or do I have to use iostat? For example, below, with a lot of jobs and a high iodepth, fio reports an avg lat/clat of around 6.3ms, which I assume includes OS block device queuing. However, iostat shows the device in question servicing IO at around 0.3ms (r_await).

# $(iostat -xdty -p nvme1c1n1 2 5 > iostat.out &) && /scripts/fio/fio-3.38/fio --ioengine=posixaio --name test --group_reporting --per_job_logs=0 --time_based=1 --runtime=10 --numjobs=32 --iodepth=128 --filename=/dev/nvme1n1
test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=posixaio, iodepth=128
...
fio-3.38
Starting 32 processes
Jobs: 32 (f=32): [R(32)][100.0%][r=1841MiB/s][r=471k IOPS][eta 00m:00s]
test: (groupid=0, jobs=32): err= 0: pid=2014545: Thu Mar 13 11:08:39 2025
  read: IOPS=645k, BW=2518MiB/s (2640MB/s)(24.6GiB/10006msec)
    slat (nsec): min=60, max=318733, avg=392.27, stdev=1055.24
    clat (usec): min=38, max=16833, avg=6311.17, stdev=3806.39
     lat (usec): min=38, max=16833, avg=6311.57, stdev=3806.29
    clat percentiles (usec):
     |  1.00th=[  660],  5.00th=[  840], 10.00th=[ 1172], 20.00th=[ 1582],
     | 30.00th=[ 2671], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7635],
     | 70.00th=[ 7701], 80.00th=[ 7832], 90.00th=[ 8848], 95.00th=[14353],
     | 99.00th=[14877], 99.50th=[15008], 99.90th=[15139], 99.95th=[15270],
     | 99.99th=[15401]
   bw (  MiB/s): min= 1562, max=10445, per=100.00%, avg=2545.01, stdev=66.08, samples=620
   iops        : min=400122, max=2673946, avg=651523.08, stdev=16917.29, samples=620
  lat (usec)   : 50=0.01%, 100=0.10%, 250=0.31%, 500=0.13%, 750=2.37%
  lat (usec)   : 1000=4.37%
  lat (msec)   : 2=18.96%, 4=5.33%, 10=58.84%, 20=9.58%
  cpu          : usr=2.34%, sys=0.16%, ctx=346449, majf=2, minf=2195
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=2.6%, 16=11.5%, 32=24.8%, >=64=61.1%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=98.8%, 8=0.3%, 16=0.2%, 32=0.1%, 64=0.1%, >=64=0.7%
     issued rwts: total=6449569,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
   READ: bw=2518MiB/s (2640MB/s), 2518MiB/s-2518MiB/s (2640MB/s-2640MB/s), io=24.6GiB (26.4GB), run=10006-10006msec

Disk stats (read/write):
  nvme1n1: ios=0/0, sectors=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%

# cat iostat.out
Linux 5.14.0-284.11.1.el9_2.x86_64 (hlg201)     03/13/2025      _x86_64_        (112 CPU)

03/13/2025 11:08:30 AM
Device             r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
nvme1c1n1       997.00 203566.00     0.00   0.00    0.34   204.18    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.34  50.55

03/13/2025 11:08:32 AM
Device             r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
nvme1c1n1       235.50  53304.00     0.00   0.00    0.37   226.34    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.09  29.60

03/13/2025 11:08:34 AM
Device             r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
nvme1c1n1       291.50  64400.00     0.00   0.00    0.34   220.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.10  34.90

03/13/2025 11:08:36 AM
Device             r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
nvme1c1n1       291.50  59320.00     0.00   0.00    0.33   203.50    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.10  32.60

03/13/2025 11:08:38 AM
Device             r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
nvme1c1n1       285.50  60232.00     0.00   0.00    0.33   210.97    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.10  32.60
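For context on what I mean by the device-side time: as far as I can tell, iostat derives r_await from the per-device counters in /proc/diskstats (milliseconds spent on reads divided by reads completed, per interval), so that number can be sampled directly alongside any fio run. Below is a rough bash sketch of that calculation, assuming the nvme1c1n1 device name from above and the usual diskstats field layout; this is just my reading of Documentation/admin-guide/iostats.rst, not anything fio provides.

#!/bin/bash
# Sample r_await-style read latency from /proc/diskstats deltas.
# Note: like iostat's r_await, this covers the time from request issue to
# completion, so it includes device-side queueing, not pure service time.
dev=${1:-nvme1c1n1}      # device name as it appears in /proc/diskstats
interval=${2:-2}         # sampling interval in seconds

read_stats() {
    # $4 = reads completed, $7 = milliseconds spent reading
    # (fields 1 and 4 after the major/minor/name columns).
    awk -v d="$dev" '$3 == d { print $4, $7 }' /proc/diskstats
}

read r0 t0 < <(read_stats)
while sleep "$interval"; do
    read r1 t1 < <(read_stats)
    dr=$((r1 - r0)); dt=$((t1 - t0))
    if [ "$dr" -gt 0 ]; then
        # average ms per completed read over the interval, iostat-style
        awk -v dt="$dt" -v dr="$dr" 'BEGIN { printf "r_await: %.2f ms\n", dt / dr }'
    fi
    r0=$r1; t0=$t1
done

If fio can't report the device-side time itself, I can keep wrapping something like this (or iostat) around the run, but it would be nicer to have it in the fio output next to lat/slat/clat.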
Regards,
Mark.