Re: fio reporting high value of read iops at iodepth 1

Thanks again for the detailed explanation. It is really helpful.


On Mon, May 25, 2020 at 11:48 AM Debraj Manna <subharaj.manna@xxxxxxxxx> wrote:
>
> Thanks again for the detailed explanation. It is really helpful.
>
> On Sun, May 24, 2020 at 3:15 PM Sitsofe Wheeler <sitsofe@xxxxxxxxx> wrote:
>>
>> On Fri, 22 May 2020 at 17:54, Debraj Manna <subharaj.manna@xxxxxxxxx> wrote:
>> >
>> > One more query
>> >
>> > "maybe check how many I/Os the kernel sent to disk versus those that
>> > fio was asked to do" - For this can I rely on iostat or do you
>> > recommend any other tool?
>>
>> You could do it by manually reading the disk stats collected in
>> /sys/block/<device>/stat just before the I/O starts, but then you'd
>> have to somehow know when the real part of the job starts (as opposed
>> to when fio was still doing file layout etc.). Further, I believe
>> those are the underlying files that iostat uses for monitoring anyway
>> (it can also use /proc/diskstats; you can see which by tracing through
>> both branches around
>> https://github.com/sysstat/sysstat/blob/b5fd09323bdbd36e34472ee448563f8a7fd34464/iostat.c#L2006
>> )...
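[Editorial note, not part of the original thread: the snapshot-and-diff idea above can be sketched as below. The two sample lines stand in for reads of /sys/block/sda/stat taken before and after a run; the field positions follow the kernel's Documentation/block/stat.rst, where field 1 is completed read I/Os. The numbers are made up for illustration.]

```shell
#!/bin/sh
# Sketch: diff the "reads completed" counter across two snapshots of
# /sys/block/<dev>/stat. In real use you would do
#   before=$(cat /sys/block/sda/stat); <run fio>; after=$(cat /sys/block/sda/stat)
# Here two sample snapshots are hard-coded so the script is self-contained.
before="51000 205 4096000 4048 40 205 1960 0 0 4048 4048"
after="102200 410 8192000 8096 80 410 3920 0 0 8096 8096"

# Field 1 of the stat file is the number of read I/Os completed.
reads_before=$(echo "$before" | awk '{print $1}')
reads_after=$(echo "$after" | awk '{print $1}')

echo "reads completed during run: $((reads_after - reads_before))"
# → reads completed during run: 51200
```

As the mail notes, the snapshots also count I/O from other processes and filesystem metadata, so a quiet system matters.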
>>
>> ...but you already have processed versions of those statistics in
>> your fio output. For example, in your RUN 1 you had this:
>> [...]
>> > bw-test: (groupid=0, jobs=1): err= 0: pid=48575: Fri May 15 18:42:15 2020
>> [...]
>> >   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
>> [...]
>> >      issued    : total=r=51200/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
>> [...]
>> > Disk stats (read/write):
>> >     dm-6: ios=48310/2, merge=0/0, ticks=3944/0, in_queue=3972,
>> > util=90.95%, aggrios=51200/40, aggrmerge=0/205, aggrticks=4048/0,
>> > aggrin_queue=4048, aggrutil=87.75%
>> >   sda: ios=51200/40, merge=0/205, ticks=4048/0, in_queue=4048, util=87.75%
>>
>> The "Disk stats (read/write)" section shows what went out of the
>> relevant block layer devices while your job was running and is
>> described in the fio documentation (e.g.
>> https://fio.readthedocs.io/en/latest/fio_doc.html#interpreting-the-output
>> ). Be aware that these stats may also cover I/O not generated by your
>> job (e.g. I/O done by other processes that was sent to disk,
>> filesystem metadata I/O etc.), but they're as close as you're going
>> to get without reaching deeper into the kernel. This is why it's
>> important to use as quiet a system as possible, to avoid
>> interference/contamination.
>>
>> Looking closer, something interesting in your results is that fio
>> says it issued 51200 read I/Os, but the device mapper device (dm-6,
>> which your filesystem is presumably on) apparently only saw 48310
>> reads AND saw 2 writes (which suggests something else was happening
>> on that device, as your fio job wouldn't have been issuing writes at
>> that point). Curiously, the underlying block device beneath your
>> device mapper device (sda) saw 51200 reads and 40 writes. That might
>> be because the underlying block stats were read ever so slightly
>> later than the device mapper stats, or because some other device
>> mapper device was also sending I/O to sda (note too that some merging
>> of writes took place, but no merging of reads did).
>>
>> --
>> Sitsofe | http://sucs.org/~sits/


