Re: Re: how to understand latency when rate is set?

Hi,

On 19 April 2018 at 15:45, shadow_lin <shadow_lin@xxxxxxx> wrote:
> Hi Sitsofe,
>         Thank you for your insight.
>         I think the lat (along with slat and clat) in the fio result means how long it takes to complete the whole I/O operation.

slat is submission latency alone. clat is completion latency alone.
lat is the total latency from the time it was submitted to the time it
came back completed. See the sections in
http://fio.readthedocs.io/en/latest/fio_doc.html#interpreting-the-output
for descriptions.
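To make the relationship concrete, here's a tiny sketch with invented numbers (microseconds); for asynchronous ioengines, total latency is roughly submission latency plus completion latency, per the fio docs linked above:

```python
# Hypothetical per-I/O timings in microseconds, purely illustrative.
slat = 12       # submission latency: time spent handing the I/O to the kernel
clat = 950      # completion latency: from submission until the I/O came back
lat = slat + clat  # total latency fio reports (approximately, for async engines)

print(lat)  # 962
```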

> So with a 4m block it means how long it takes to submit the request and get the confirmation that the 4m block is
>  written to the disk. So I think with higher bandwidth the lat should be lower. Is my understanding correct? Without rate

I don't think your understanding is completely correct.

Something to be aware of: when you start using giant blocks sizes your
I/Os may have to be split up due to device (or system) constraints.
Generally speaking disks don't normally accept huge (say bigger than
2MByte) I/Os and if something tries to send them it is up to the
kernel to split them up (generating extra work). Typically there's an
optimal block size that the disk likes best and when you go bigger
than that you often go past the point of diminishing returns. I
mention this because if your 4MByte I/O is split up into 64 x 64KByte
pieces its latency is now the time for ALL 64 of those pieces to come
back, because they all have to complete. You may also find this
effect means the depth of I/Os being sent down to the disk is
DIFFERENT to what fio is submitting. Take a look at the iostat command
output while your fio is running to see what size and depth of I/Os
the kernel is sending down.
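As a back-of-the-envelope sketch of the splitting effect (all numbers here are made up - the real split size and device depth depend on your kernel and hardware, which is why iostat is worth checking):

```python
# A 4 MiB fio request split by the kernel into a hypothetical
# 64 KiB maximum device I/O size.
block_size = 4 * 1024 * 1024   # what fio submits
max_io_size = 64 * 1024        # assumed device/kernel limit
pieces = block_size // max_io_size
print(pieces)  # 64

# The 4 MiB I/O only completes when the LAST piece returns, so its
# latency covers all 64 pieces. With an assumed service time per
# piece and a limit on pieces in flight at once:
per_piece_ms = 0.5             # invented service time per 64 KiB piece
device_depth = 8               # invented number of pieces serviced at once
waves = pieces / device_depth  # 64 pieces drain in ~8 "waves"
print(waves * per_piece_ms)    # 4.0 (ms for the whole 4 MiB I/O)
```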

Re throughput and latency: imagine the case where the system only has
one thing to do (one tiny I/O). It could be the case that this is
"easier" to complete than when 100 things have to all be done at once
(e.g. due to the overhead of switching between them all). You might
get a higher throughput doing 100 I/Os at once (all spare resources
are kept occupied) but at the cost that each I/O operation completes a
little bit slower - higher throughput but worse latency. Latency does
not have to go down just because throughput goes up...
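One way to see the trade-off is through Little's law (mean queue depth = throughput x mean latency, so mean latency = depth / IOPS). The IOPS figures below are invented, just to show the shape of the relationship:

```python
# Little's law sketch: mean latency = queue depth / throughput.
def mean_latency_ms(depth, iops):
    # depth: mean I/Os in flight; iops: completions per second
    return depth * 1000 / iops

# One I/O at a time: modest throughput, but each I/O returns quickly.
print(mean_latency_ms(1, 5000))     # 0.2 (ms)

# 100 in flight: throughput went UP (say to 20000 IOPS), yet each
# individual I/O now spends longer queued - worse per-I/O latency.
print(mean_latency_ms(100, 20000))  # 5.0 (ms)
```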

-- 
Sitsofe | http://sucs.org/~sits/
--
To unsubscribe from this list: send the line "unsubscribe fio" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


