Hi,

When I run my peak testing to see if we've regressed, my test script
always does:

	echo 0 > /sys/block/$DEV/queue/iostats
	echo 2 > /sys/block/$DEV/queue/nomerges

for each device being used. It's unfortunate that we need to disable
iostats, but without doing that, I lose about 12% performance. The main
reason for that is the time querying we need to do when iostats are
enabled. As it turns out, lots of other block code is quite trigger
happy with querying time as well. We do have some nice batching in
place that helps amortize the cost, but it's not perfect.

This trivial patchset simply caches the current time in struct
blk_plug, on the premise that any issue-side time querying can get
adequate granularity through that (a rough sketch of the idea is
appended after the diffstat below). Nobody really needs nsec
granularity on the timestamp.

Results are in patch 3, but the tl;dr is a more than 9% improvement
(108M -> 118M IOPS) for my test case, which doesn't even enable most of
the costly block layer items that you'd typically find in a distro and
which would further increase the number of issue-side time calls. This
brings iostats enabled _almost_ to the level of turning it off.

Can also be found in my block-issue-ts branch:

https://git.kernel.dk/cgit/linux/log/?h=block-issue-ts

 block/bfq-cgroup.c        | 14 +++---
 block/bfq-iosched.c       | 28 +++++------
 block/blk-cgroup.c        |  2 +-
 block/blk-core.c          | 33 +++++++------
 block/blk-flush.c         |  2 +-
 block/blk-iocost.c        |  8 ++--
 block/blk-iolatency.c     |  6 +--
 block/blk-mq.c            | 18 ++++----
 block/blk-throttle.c      |  6 +--
 block/blk-wbt.c           |  5 +-
 drivers/md/raid1-10.c     |  2 +-
 include/linux/blk_types.h | 42 -----------------
 include/linux/blkdev.h    | 97 ++++++++++++++++++++++++++++++++++++---
 include/linux/sched.h     |  2 +-
 kernel/sched/core.c       |  4 +-
 15 files changed, 160 insertions(+), 109 deletions(-)

Changes since v3:
- Include a ktime_get() variant, and use that to convert the remaining
  ktime user (BFQ)
- Remove RFC label, I think this is ready to go
- Rebase on 6.8-rc1

-- 
Jens Axboe
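
Appended sketch of the idea (illustrative only; the field name
cur_ktime and the helper blk_time_get_ns() are my shorthand here, not
necessarily what the series uses): a nanosecond timestamp is cached in
the plug the first time an issue-side caller asks for it, and every
later caller in the same plug batch reuses that value instead of doing
its own ktime_get_ns().

	/*
	 * Illustrative sketch -- names are assumptions, see the actual
	 * patches for the real interface.
	 */
	struct blk_plug {
		/* ...existing members... */
		u64 cur_ktime;	/* cached issue-side timestamp, 0 if unset */
	};

	static inline u64 blk_time_get_ns(void)
	{
		struct blk_plug *plug = current->plug;

		if (!plug)
			return ktime_get_ns();

		/*
		 * Lazily populate the cached value so one plug batch
		 * shares a single time query; that granularity is plenty
		 * for issue-side users like iostats.
		 */
		if (!plug->cur_ktime)
			plug->cur_ktime = ktime_get_ns();

		return plug->cur_ktime;
	}

In a real implementation the cached value would also have to be
invalidated when the plug is flushed or the task schedules out, so a
stale timestamp can't leak across batches.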