Hi,

When I run my peak testing to see if we've regressed, my test script
always does:

  echo 0 > /sys/block/$DEV/queue/iostats
  echo 2 > /sys/block/$DEV/queue/nomerges

for each device being used. It's unfortunate that we need to disable
iostats, but without doing that, I lose about 12% performance. The main
reason for that is the time querying we need to do when iostats are
enabled. As it turns out, lots of other block code is quite
trigger-happy with querying time as well. We do have some nice batching
in place which helps amortize that, but it's not perfect.

This trivial patchset simply caches the current time in struct
blk_plug, on the premise that any issue-side time querying can get
adequate granularity through that. Nobody really needs nsec granularity
on the timestamp. A rough sketch of the idea is included at the end of
this mail.

Results are in patch 2, but the tl;dr is a more than 9% improvement
(108M -> 118M IOPS) for my test case, which doesn't even enable most of
the costly block layer items that you'd typically find in a distro and
which would further increase the number of issue-side time calls. This
brings iostats enabled _almost_ to the level of turning it off.

v2:
- Fix typo in cover letter; the prep script obviously turns _off_
  iostats normally
- Cover the rest of the block/* cases that use ktime_get_ns()
- Fix build error in block/blk-wbt.c
- Don't use the LSB to detect whether the timestamp is valid; just
  accept that we'll do a double ktime_get_ns() if we happen to get 0
  as a valid time
- Invalidate the timestamp on any schedule-out condition
- Add two patches reclaiming the added space in blk_plug
- Update to current perf results
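For reference, here's a minimal sketch of the caching helper. It
assumes a cached u64 timestamp field added to struct blk_plug (called
cur_ktime here); the helper and field names are illustrative, see the
actual patches for the real implementation:

#include <linux/blkdev.h>
#include <linux/sched.h>

/*
 * Sketch only: issue-side code would call this instead of using
 * ktime_get_ns() directly.
 */
static inline u64 blk_time_get_ns(void)
{
	struct blk_plug *plug = current->plug;

	/* No plug active, fall back to querying the time directly */
	if (!plug)
		return ktime_get_ns();

	/*
	 * 0 can be a valid time; rather than track validity separately,
	 * just accept the (rare) extra ktime_get_ns() call if the cached
	 * value happens to be 0.
	 */
	if (!plug->cur_ktime)
		plug->cur_ktime = ktime_get_ns();

	return plug->cur_ktime;
}

The cached value is cleared whenever the task schedules out, so a plug
that spans a sleep hands out a fresh timestamp on the next query rather
than a stale one.

--
Jens Axboe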