It's unclear that number_ios is supposed to be per loop, and it's actually challenging to get per-loop number_ios behaviour in a way that is consistent and will also work for separate job verification. I started down that route originally and then realised that verification of loops where you don't know exactly what was written is problematic.

As it stood, what seemed to be happening was that number_ios was being multiplied by the total loop count, but a loop wasn't actually being ended when that loop's number_ios was exceeded. For example, you might have expected a sequential write to just keep rewriting the start of the file when using number_ios with loops greater than 1, but this was not the case - it would carry on from where it left off on the second loop until it reached the end of the file and only wrap then.

Quick question: what are you expecting number_ios coupled with loops>=2 to do when it's per loop? Bear in mind that unlike io_size, number_ios is documented as not extending jobs...
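To make the distinction concrete, here's a sketch of the two semantics as I understand them (the job names are made up, and the device path and sizes just mirror your example below, so adjust to taste):

# number_ios as a per-job cap: stops after 8192 I/Os in total,
# however many loops are requested
$ fio --name=capped --ioengine=libaio --direct=1 --filename=/dev/sdb \
      --rw=randread --bs=4K --iodepth=1 --size=16GB --loops=32 \
      --number_ios=8192

# io_size scales with loops: each loop transfers 32MB, so loops=32
# moves 32 x 32MB = 1GB in total (the 30 second run you measured)
$ fio --name=scaled --ioengine=libaio --direct=1 --filename=/dev/sdb \
      --rw=randread --bs=4K --iodepth=1 --size=16GB --loops=32 \
      --io_size=32MB

The first should finish in roughly the time one loop's worth of I/O takes; the second keeps going for all 32 loops.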
On 26 March 2018 at 17:14, Kris Davis <Kris.Davis@xxxxxxx> wrote:
> Sitsofe,
>
>> Is this a bad thing? I was aiming for that behaviour (for number_ios to behave per job rather than per loop)...
>
> Oh, I didn't catch that. I was assuming that number_ios was analogous to io_size. That is, indicating what each "loop" would do. Wouldn't you be changing the current behavior? My prior test with number_ios and loops was taking about 30 seconds as expected.
>
> Thanks
>
> Kris Davis
> Western Digital Corporation
> Email: kris.davis@xxxxxxx
> Office: +1-507-322-2376
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@xxxxxxxxx]
> Sent: Monday, March 26, 2018 11:01 AM
> To: Kris Davis <Kris.Davis@xxxxxxx>
> Cc: fio@xxxxxxxxxxxxxxx; Itay Ben Yaacov <Itay.BenYaacov@xxxxxxx>
> Subject: Re: "No I/O performed by <engine>" reporting bug?
>
> Hi Kris,
>
> On 26 March 2018 at 16:49, Kris Davis <Kris.Davis@xxxxxxx> wrote:
>>
>> Thanks. I gave it a try and no longer see the error message. However, it doesn't appear that the loop count is being used any longer when the number_ios option is set. The following runs in only about a second:
>>
>> $ fio --ioengine=libaio --loops=32 --direct=1 --numjobs=1
>> --norandommap --randrepeat=0 --size=16GB --filename=/dev/sdb
>> --name=Random-read-4K-QD1 --rw=randread --bs=4K --iodepth=1
>> --number_ios=8192
>> Random-read-4K-QD1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
>> fio-3.5
>> Starting 1 process
>> Jobs: 1 (f=1)
>> Random-read-4K-QD1: (groupid=0, jobs=1): err= 0: pid=31578: Mon Mar 26 10:37:06 2018
>>   read: IOPS=7433, BW=29.0MiB/s (30.4MB/s)(32.0MiB/1102msec)
>>    slat (nsec): min=4564, max=43459, avg=6011.65, stdev=662.43 ...
>> Run status group 0 (all jobs):
>>    READ: bw=29.0MiB/s (30.4MB/s), 29.0MiB/s-29.0MiB/s (30.4MB/s-30.4MB/s), io=32.0MiB (33.6MB), run=1102-1102msec
>
> Is this a bad thing? I was aiming for that behaviour (for number_ios to behave per job rather than per loop)...
>
>> But, if I use io_size=32MB, it does actually run for about 30 seconds as expected:
>>
>> $ fio --ioengine=libaio --loops=32 --direct=1 --numjobs=1
>> --norandommap --randrepeat=0 --size=16GB --filename=/dev/sdb
>> --name=Random-read-4K-QD1 --rw=randread --bs=4K --iodepth=1
>> --io_size=32MB
>> Random-read-4K-QD1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
>> fio-3.5
>> Starting 1 process
>> Random-read-4K-QD1: No I/O performed by libaio, perhaps try --debug=io option for details?
>
> ^^^ Have you found another problem here?
>
>> Random-read-4K-QD1: (groupid=0, jobs=1): err= 0: pid=31998: Mon Mar 26 10:46:45 2018
>>   read: IOPS=7790, BW=30.4MiB/s (31.9MB/s)(1024MiB/33650msec)
>>    slat (nsec): min=4443, max=43457, avg=4831.99, stdev=286.39 ...
>> Run status group 0 (all jobs):
>>    READ: bw=30.4MiB/s (31.9MB/s), 30.4MiB/s-30.4MiB/s (31.9MB/s-31.9MB/s), io=1024MiB (1074MB), run=33650-33650msec
>>
>> Disk stats (read/write):
>>   sdb: ios=262130/0, merge=0/0, ticks=31453/0, in_queue=31396, util=93.06%
>
> --
> Sitsofe | http://sucs.org/~sits/

--
Sitsofe | http://sucs.org/~sits/