The following behavior is inconsistent:
* For request-based dm queues the default value of rq_affinity is 1.
* For bio-based dm queues the default value of rq_affinity is 0.

The default value for request-based dm queues is 1 because of the
following code in blk_mq_init_allocated_queue():

	q->queue_flags |= QUEUE_FLAG_MQ_DEFAULT;

From <linux/blkdev.h>:

#define QUEUE_FLAG_MQ_DEFAULT	((1UL << QUEUE_FLAG_IO_STAT) |		\
				 (1UL << QUEUE_FLAG_SAME_COMP) |	\
				 (1UL << QUEUE_FLAG_NOWAIT))

The default value of rq_affinity for bio-based dm queues is 0 because
the dm alloc_dev() function does not set any of the QUEUE_FLAG_SAME_*
flags.

I think the different default values are the result of an oversight when
blk-mq support was added in the device mapper code. Hence this patch that
changes the default value of rq_affinity from 0 to 1 for bio-based dm
queues.

This patch reduces the boot time from 12.23 to 12.20 seconds on my test
setup, a Pixel 2023 development board. The storage controller on that
test setup supports a single completion interrupt and hence benefits from
redirecting I/O completions to a CPU core that is closer to the
submitter.

Cc: Mikulas Patocka <mpatocka@xxxxxxxxxx>
Cc: Eric Biggers <ebiggers@xxxxxxxxxx>
Cc: Jaegeuk Kim <jaegeuk@xxxxxxxxxx>
Cc: Daniel Lee <chullee@xxxxxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx
Fixes: bfebd1cdb497 ("dm: add full blk-mq support to request-based DM")
Signed-off-by: Bart Van Assche <bvanassche@xxxxxxx>
---
 drivers/md/dm.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 56aa2a8b9d71..9af216c11cf7 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2106,6 +2106,7 @@ static struct mapped_device *alloc_dev(int minor)
 	if (IS_ERR(md->disk))
 		goto bad;
 	md->queue = md->disk->queue;
+	blk_queue_flag_set(QUEUE_FLAG_SAME_COMP, md->queue);
 
 	init_waitqueue_head(&md->wait);
 	INIT_WORK(&md->work, dm_wq_work);
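
For reviewers who want to check the resulting default, below is a minimal
user-space sketch (not part of the patch). It reads the rq_affinity queue
attribute through sysfs; the device name "dm-0" is an assumption for
illustration, so substitute the bio-based dm device under test. With
QUEUE_FLAG_SAME_COMP set by this patch, the reported value should be 1.

/*
 * Sketch only: print the rq_affinity value of a block device queue.
 * Usage: ./rq_affinity [device], e.g. "./rq_affinity dm-0".
 */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	/* "dm-0" is a placeholder device name, not something this patch creates. */
	const char *dev = argc > 1 ? argv[1] : "dm-0";
	char path[256];
	FILE *f;
	int val;

	snprintf(path, sizeof(path), "/sys/block/%s/queue/rq_affinity", dev);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return EXIT_FAILURE;
	}
	if (fscanf(f, "%d", &val) != 1) {
		fclose(f);
		fprintf(stderr, "failed to parse %s\n", path);
		return EXIT_FAILURE;
	}
	fclose(f);
	printf("%s: rq_affinity = %d\n", dev, val);
	return EXIT_SUCCESS;
}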