RE: "No I/O performed by <engine>" reporting bug?

Sitsofe, 

I guess I was expecting that loops just means to repeat whatever the job was doing previously that number of times.  That's what I conclude from the "loops" description: "Run the specified number of iterations of this job. Used to repeat the same workload a given number of times. Defaults to 1."

I was expecting that each loop would start sequential operations back at 0 (or at the configured offset), and redo any start_delay or ramp_time each time.  If that is not the case, I'm either missing something in the doc, or it could probably use further clarification.  If io_size is used rather than number_ios, does the starting LBA get reset after each loop?
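
For example (just a sketch - the device path, sizes, and counts below are placeholders, not from an actual run), the kind of job I had in mind would be something like:

$ fio --name=seq-write-loop --ioengine=libaio --direct=1 --rw=write \
      --bs=128K --iodepth=1 --filename=/dev/sdb --size=16GB \
      --io_size=32MB --loops=4

where I would have expected each of the 4 loops to rewrite the same first 32MB starting from offset 0, rather than later loops carrying on from wherever the previous one stopped.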

I don't really have a strong opinion on how number_ios works (we can use io_size instead); it just seems like it was working differently before, and otherwise the documentation could use further clarification.  Do you know of any other "quantity"-type options that would operate independently of the loops option?  (I think this is the first time I've seen loops used.)

Thanks

Kris Davis

-----Original Message-----
From: Sitsofe Wheeler [mailto:sitsofe@xxxxxxxxx] 
Sent: Monday, March 26, 2018 12:27 PM
To: Kris Davis <Kris.Davis@xxxxxxx>
Cc: fio@xxxxxxxxxxxxxxx; Itay Ben Yaacov <Itay.BenYaacov@xxxxxxx>
Subject: Re: "No I/O performed by <engine>" reporting bug?

It's unclear that number_ios is supposed to be per loop, and it's actually challenging to get per-loop number_ios behaviour in a way that is consistent and will also work for separate job verification. I started down that route originally and then realised that verification of loops where you don't write exactly everything is problematic. As it stood, what seemed to be happening was that number_ios was being scaled up by the total number of loops, but a loop wasn't actually being ended when that loop's number_ios was exceeded. For example, you might have expected a sequential write to just keep rewriting the start of the file when using number_ios and loops greater than 1, but this was not the case - it would carry on from where it left off on the second loop until it reached the end of the file and only wrap then.
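
To make that concrete, take a job roughly like the following (device path and numbers are just for illustration, not an actual run):

$ fio --name=seq-write --ioengine=libaio --direct=1 --rw=write --bs=4K \
      --iodepth=1 --filename=/dev/sdb --size=64MB --number_ios=1024 --loops=2

With 1024 4K writes (4MB) per "loop", the second loop would carry on writing from the 4MB mark rather than rewriting from offset 0, and would only wrap once it hit the end of the 64MB region - which is the behaviour described above.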

Quick question: what are you expecting number_ios coupled with
loops>=2 to do when it's per loop? Bear in mind that unlike io_size,
number_ios is documented as not extending jobs...

On 26 March 2018 at 17:14, Kris Davis <Kris.Davis@xxxxxxx> wrote:
> Sitsofe,
>
>> Is this a bad thing? I was aiming for that behaviour (for number_ios to behave per job rather than per loop)...
>
> Oh, I didn't catch that.  I was assuming that number_ios was analogous to io_size.  That is, indicating what each "loop" would do.  Wouldn't you be changing the current behavior?  My prior test with number_ios and loops was taking about 30 seconds as expected.
>
> Thanks
>
> Kris Davis
> Western Digital Corporation
> Email: kris.davis@xxxxxxx
> Office: +1-507-322-2376
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@xxxxxxxxx]
> Sent: Monday, March 26, 2018 11:01 AM
> To: Kris Davis <Kris.Davis@xxxxxxx>
> Cc: fio@xxxxxxxxxxxxxxx; Itay Ben Yaacov <Itay.BenYaacov@xxxxxxx>
> Subject: Re: "No I/O performed by <engine>" reporting bug?
>
> Hi Kris,
>
> On 26 March 2018 at 16:49, Kris Davis <Kris.Davis@xxxxxxx> wrote:
>>
>> Thanks.   I gave it a try and no longer see the error message.   However, it doesn't appear that the loop count is being used any longer when the number_ios option is set.  The following runs in less than about a second:
>>
>> $ fio --ioengine=libaio --loops=32 --direct=1 --numjobs=1 
>> --norandommap --randrepeat=0 --size=16GB --filename=/dev/sdb
>> --name=Random-read-4K-QD1 --rw=randread --bs=4K --iodepth=1
>> --number_ios=8192
>> Random-read-4K-QD1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 
>> 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
>> fio-3.5
>> Starting 1 process
>> Jobs: 1 (f=1)
>> Random-read-4K-QD1: (groupid=0, jobs=1): err= 0: pid=31578: Mon Mar 26 10:37:06 2018
>>    read: IOPS=7433, BW=29.0MiB/s (30.4MB/s)(32.0MiB/1102msec)
>>     slat (nsec): min=4564, max=43459, avg=6011.65, stdev=662.43 ...
>> Run status group 0 (all jobs):
>>    READ: bw=29.0MiB/s (30.4MB/s), 29.0MiB/s-29.0MiB/s 
>> (30.4MB/s-30.4MB/s), io=32.0MiB (33.6MB), run=1102-1102msec
>
> Is this a bad thing? I was aiming for that behaviour (for number_ios to behave per job rather than per loop)...
>
>> But, if I use io_size=32MB, it does actually run for about 30 seconds as expected:
>>
>> $  fio --ioengine=libaio --loops=32 --direct=1 --numjobs=1 
>> --norandommap --randrepeat=0 --size=16GB --filename=/dev/sdb
>> --name=Random-read-4K-QD1 --rw=randread --bs=4K --iodepth=1 
>> --io_size=32MB
>> Random-read-4K-QD1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 
>> 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
>> fio-3.5
>> Starting 1 process
>> Random-read-4K-QD1: No I/O performed by libaio, perhaps try --debug=io option for details?
>
> ^^^ Have you found another problem here?
>
>> Random-read-4K-QD1: (groupid=0, jobs=1): err= 0: pid=31998: Mon Mar 26 10:46:45 2018
>>    read: IOPS=7790, BW=30.4MiB/s (31.9MB/s)(1024MiB/33650msec)
>>     slat (nsec): min=4443, max=43457, avg=4831.99, stdev=286.39 ...
>> Run status group 0 (all jobs):
>>    READ: bw=30.4MiB/s (31.9MB/s), 30.4MiB/s-30.4MiB/s 
>> (31.9MB/s-31.9MB/s), io=1024MiB (1074MB), run=33650-33650msec
>>
>> Disk stats (read/write):
>>   sdb: ios=262130/0, merge=0/0, ticks=31453/0, in_queue=31396, 
>> util=93.06%
>
> --
> Sitsofe | http://sucs.org/~sits/



--
Sitsofe | http://sucs.org/~sits/
