As a data point, I tested both V2 and this V3 patch series, applied to my test initiator currently running the 5.10 branch of linux-nvme. All results were gathered with FIO using the following options: --time_based --runtime=60 --thread --rw=randread --refill_buffers --direct=1 --ioengine=io_uring --hipri --fixedbufs --bs=4k --iodepth=32 --iodepth_batch_complete_min=1 --iodepth_batch_complete_max=32 --iodepth_batch=8 --group_reporting --gtod_reduce=0 --disable_lat=0, with only the queue depth, batch size, and number of job threads varied as indicated (a full example invocation appears at the end of this mail). Tests are directed at one or more remote nvme Optane devices.

All data is reported as: IOPS (k), Avg Lat (usec), stdev (usec), 99.99 clat (usec)

For reference, baseline performance on this branch, running without nvme multipathing enabled, using the 'hipri' polling option:

[1 thread, QD 1, Batch 1]    33.1, 29.21, 1.42, 54.52
[1 thread, QD 32, Batch 8]   268, 101.17, 14.87, 139
[16 threads, QD 32, Batch 8] 1965, 247.25, 28.28, 449

This branch with nvme multipathing enabled and V2 of the patch series applied:

[1 thread, QD 1, Batch 1]    33, 29.22, 1.56, 54.01
[1 thread, QD 32, Batch 8]   259, 104.38, 15.04, 141
[16 threads, QD 32, Batch 8] 1905, 255.52, 30.97, 461

The same config as above for V2, but with FIO run without the 'hipri' polling option:

[1 thread, QD 1, Batch 1]    22.9, 41.66, 3.78, 78.33
[1 thread, QD 32, Batch 8]   224, 103.88, 28.41, 163
[16 threads, QD 32, Batch 8] 1910, 245.23, 66.30, 502

The same branch with V3 of the patch series applied, again using the 'hipri' option:

[1 thread, QD 1, Batch 1]    33.2, 29.12, 1.35, 54.53
[1 thread, QD 32, Batch 8]   258, 104.55, 15.01, 141
[16 threads, QD 32, Batch 8] 1914, 254.19, 30.00, 457

So the data shows that this patch series clearly enables the use of 'hipri' polling when the kernel is configured with nvme multipathing, which was not previously supported. There is no significant difference in performance with and without multipathing enabled for bio-based polling with either version of the patch series, and V2 and V3 report virtually the same performance, as expected.

So I tip my hat to this patch series - cheers.

Tested-by: Mark Wunderlich <mark.wunderlich@xxxxxxxxx>
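
In case it helps anyone reproduce a run, below is a sketch of how one of the above data points (the [16 threads, QD 32, Batch 8] 'hipri' case) could be assembled into a complete fio command line. The job name, target device path, and --numjobs value are placeholders standing in for my setup; every other option is taken verbatim from the list at the top of this mail:

  # Job name, --filename, and --numjobs below are placeholders for my setup.
  # Drop --hipri for the non-polling runs; --iodepth, --iodepth_batch, and
  # --numjobs are the values that were varied per data point.
  fio --name=polltest --filename=/dev/nvme0n1 --numjobs=16 \
      --time_based --runtime=60 --thread --rw=randread --refill_buffers \
      --direct=1 --ioengine=io_uring --hipri --fixedbufs --bs=4k \
      --iodepth=32 --iodepth_batch_complete_min=1 \
      --iodepth_batch_complete_max=32 --iodepth_batch=8 \
      --group_reporting --gtod_reduce=0 --disable_lat=0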