Hi,
On 2022/12/07 11:15, Ming Lei wrote:
On Wed, Dec 07, 2022 at 10:19:08AM +0800, Yu Kuai wrote:
Hi,
On 2022/12/07 2:15, Gulam Mohamed wrote:
Use ktime to change the granularity of I/O accounting in the block layer
from milliseconds to nanoseconds, so that devices whose latency is in the
microsecond range get proper latency values. With nanosecond granularity,
the iostat command, which previously showed incorrect values for %util,
now shows correct values.
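
The general shape of such a change might look like the sketch below (a
minimal illustration of the ms-to-ns idea, not the actual patch; the
helper names here are made up):

#include <linux/jiffies.h>
#include <linux/ktime.h>

/*
 * Before: jiffies-based duration, millisecond granularity. Any I/O
 * that completes within one jiffy accounts as (almost) zero time.
 */
static inline u64 io_duration_ms(unsigned long start_jiffies)
{
	return jiffies_to_msecs(jiffies - start_jiffies);
}

/*
 * After: ktime-based duration, nanosecond granularity, so
 * microsecond-scale device latencies are no longer rounded away.
 */
static inline u64 io_duration_ns(u64 start_ns)
{
	return ktime_get_ns() - start_ns;
}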
This patch doesn't correct the counting of io_ticks; it only reduces the
accounting error by moving from jiffies (ms) to ns. The problem that
%util can be smaller or larger than the real value still exists.
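
For context, io_ticks is approximated by stamp-based sampling rather
than measured exactly, roughly like the simplified sketch below (after
update_io_ticks() in block/blk-core.c; the exact code varies by kernel
version). Finer granularity shrinks each per-sample rounding error, but
the sampling itself can still over- or under-estimate:

/*
 * Simplified sketch of the io_ticks sampling scheme. Busy time is
 * only credited when the per-device stamp advances, so io_ticks is
 * an approximation, not an exact integral of device busy time.
 */
static void update_io_ticks_sketch(struct block_device *part,
				   unsigned long now, bool end)
{
	unsigned long stamp = READ_ONCE(part->bd_stamp);

	if (time_after(now, stamp) &&
	    cmpxchg(&part->bd_stamp, stamp, now) == stamp)
		/*
		 * On I/O completion, credit the whole window since the
		 * last stamp as busy; on I/O start, credit one tick.
		 */
		__part_stat_add(part, io_ticks, end ? now - stamp : 1);
}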
Agreed.
However, I think this change makes sense considering that the error
margin is much smaller and the performance overhead should be minimal.
Hi Ming, what do you think?
I remember that ktime_get() has non-negligible overhead. Is there any
test data (IOPS/CPU utilization) from running fio or t/io_uring on
null_blk with this patch?
Yes, testing with null_blk is necessary; we don't want any performance
regression.
BTW, I thought it would be fine because ktime is already used for
tracking I/O latency.
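
If I recall correctly, blk-mq already stamps requests at allocation,
something like the paraphrased sketch below (after blk_mq_rq_ctx_init()
in block/blk-mq.c; exact code varies by version):

/*
 * Paraphrased sketch of existing blk-mq behaviour: requests already
 * get a ktime_get_ns() stamp when time stamping is needed (I/O stats,
 * latency tracking), so the ktime cost is already on this path.
 */
static void stamp_rq_start_time_sketch(struct request *rq)
{
	if (blk_mq_need_time_stamp(rq))
		rq->start_time_ns = ktime_get_ns();
	else
		rq->start_time_ns = 0;
}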
Thanks,
Ming