On Fri, 22 Feb 2019 at 22:33, Jeff Moyer <jmoyer@xxxxxxxxxx> wrote:
>
> Sitsofe Wheeler <sitsofe@xxxxxxxxx> writes:
>
> > Hi,
> >
> > On Fri, 22 Feb 2019 at 16:19, Jing Booth <Jing.Booth@xxxxxxx> wrote:
> >>
> >> Hi all,
> >>
> >> In the test script shown below there is a [global] section and an
> >> [nvme0n1] section. The script is used to test a storage device.
> >> What queue depth does the storage device receive? Thanks
> >>
> >> fio --name=global --ba=16K --bs=16K --buffered=0 --ioengine=libaio
> >> --nice=-10 --rw=randrw --size=100% --status-interval=1 --ramp_time=2
> >> --runtime=5 --time_based --rwmixread=25 --group_reporting
> >> --numjobs=4 --iodepth=16 --norandommap
> >> --random_generator=tausworthe64 --randrepeat=0
> >> --percentile_list=99:99.9:99.99:99.999:99.9999:99.99999:99.999999:99.9999999
> >> --name=nvme0n1 --filename=/dev/nvme0n1 --iodepth=64
> >
> > Something like 4 if you count all the jobs together (because numjobs
> > is 4), but you can check for yourself by looking at the submit line in
> > the IO depths output that fio produces.
> >
> > Did you see the warnings about the libaio engine in the
> > manual/documentation
> > (https://fio.readthedocs.io/en/latest/fio_doc.html#i-o-engine ) or the
> > warning in the iodepth option section
> > (https://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-iodepth
> > )?
>
> He specified buffered=0. The end result will be 64 * 4 = 256. Of
> course, at the device things may differ due to splitting and/or merging
> of I/O in the kernel.

I stand corrected (I did a quick search for direct=1 in the line posted
and missed buffered=0), thanks Jeff! I would still recommend looking at
what fio reports back, and you can also use external tools such as
iostat to see what the lower levels are reporting too.

--
Sitsofe | http://sucs.org/~sits/
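
For anyone following the thread: the two --name options on that command
line define two sections, so it is roughly equivalent to a job file
like the sketch below. The annotations are mine, and I believe
--status-interval is a command-line-only option, so it has no job-file
line here:

    ; Rough job-file equivalent of the command line above (a sketch)
    [global]
    ; "ba" is the short form of blockalign
    ba=16K
    bs=16K
    ; buffered=0 is the same as direct=1, which libaio needs to be
    ; genuinely asynchronous
    buffered=0
    ioengine=libaio
    nice=-10
    rw=randrw
    size=100%
    ramp_time=2
    runtime=5
    time_based
    rwmixread=25
    group_reporting
    numjobs=4
    ; this global depth is overridden below for the nvme0n1 job
    iodepth=16
    norandommap
    random_generator=tausworthe64
    randrepeat=0
    percentile_list=99:99.9:99.99:99.999:99.9999:99.99999:99.999999:99.9999999

    [nvme0n1]
    filename=/dev/nvme0n1
    ; the per-job value wins over the global iodepth=16
    iodepth=64

Because the per-job iodepth=64 overrides the global iodepth=16, each of
the 4 jobs runs at depth 64, giving up to 4 * 64 = 256 I/Os in flight,
which is Jeff's figure.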
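
And for the iostat suggestion, something along these lines should do
(assuming sysstat is installed; the name of the queue-size column
varies between sysstat versions):

    # Print extended per-device statistics every second while fio runs;
    # watch the average queue size column (aqu-sz in newer sysstat,
    # avgqu-sz in older versions) to see the depth the device is
    # actually seeing after kernel splitting/merging.
    iostat -x 1 nvme0n1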