Hi all,

there have been several attempts to implement a latency-based I/O
scheduler for native nvme multipath, all of which had their issues.
So it's time to start afresh, this time using the QoS framework
already present in the block layer.

The series consists of two parts:
- a new 'blk-nlatency' QoS module, which is just a simple per-node
  latency tracker
- a 'latency' nvme I/O policy

Using the 'tiobench' fio script with a 512 byte blocksize I'm getting
the following latencies (in usecs) as a baseline:
- seq write:  avg 186  stddev 331
- rand write: avg 4598 stddev 7903
- seq read:   avg 149  stddev 65
- rand read:  avg 150  stddev 68

Enabling the 'latency' iopolicy:
- seq write:  avg 178  stddev 113
- rand write: avg 3427 stddev 6703
- seq read:   avg 140  stddev 59
- rand read:  avg 141  stddev 58

Setting the 'decay' parameter to 10:
- seq write:  avg 182  stddev 65
- rand write: avg 2619 stddev 5894
- seq read:   avg 142  stddev 57
- rand read:  avg 140  stddev 57

That's on a 32G FC testbed running against a brd target, with fio
running 48 threads. So the promises are met: latency goes down, and
we're even able to control the standard deviation via the 'decay'
parameter.

As usual, comments and reviews are welcome.

Changes to the original version:
- split the rqos debugfs entries
- modify the commit message to indicate latency
- rename to blk-nlatency

Hannes Reinecke (2):
  block: track per-node I/O latency
  nvme: add 'latency' iopolicy

 block/Kconfig                 |   6 +
 block/Makefile                |   1 +
 block/blk-mq-debugfs.c        |   2 +
 block/blk-nlatency.c          | 388 ++++++++++++++++++++++++++++++++++
 block/blk-rq-qos.h            |   6 +
 drivers/nvme/host/multipath.c |  57 ++++-
 drivers/nvme/host/nvme.h      |   1 +
 include/linux/blk-mq.h        |  11 +
 8 files changed, 465 insertions(+), 7 deletions(-)
 create mode 100644 block/blk-nlatency.c

-- 
2.35.3
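
P.S.: a minimal usage sketch for anyone who wants to reproduce the
numbers above. Switching the iopolicy uses the existing per-subsystem
sysfs attribute (with 'latency' being the value added by this series);
the location of the 'decay' knob is my assumption only, since the cover
letter merely mentions that the rqos debugfs entries were split, so the
exact path below is hypothetical and may not match the patches:

  # select the latency-based path selector for the whole subsystem
  echo latency > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy

  # hypothetical: lower the decay factor of the per-node latency tracker
  # (adjust the path to wherever blk-nlatency exposes it on your kernel)
  echo 10 > /sys/kernel/debug/block/<disk>/rqos/nlatency/decay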