From: Ming Lei <ming.lei@xxxxxxxxxx>

[ Upstream commit b04f50ab8a74129b3041a2836c33c916be3c6667 ]

Only attempt to merge a bio if ctx->rq_list isn't empty, because:

1) for high-performance SSDs, dispatch succeeds most of the time, so
there may often be nothing left in ctx->rq_list; don't try to merge
over the sw queue if it is empty, and we save one acquisition of
ctx->lock

2) we can't expect good merge performance on the per-cpu sw queue, and
missing one merge on the sw queue isn't a big deal since tasks can be
scheduled from one CPU to another.

Cc: Laurence Oberman <loberman@xxxxxxxxxx>
Cc: Omar Sandoval <osandov@xxxxxx>
Cc: Bart Van Assche <bart.vanassche@xxxxxxx>
Tested-by: Kashyap Desai <kashyap.desai@xxxxxxxxxxxx>
Reported-by: Kashyap Desai <kashyap.desai@xxxxxxxxxxxx>
Reviewed-by: Christoph Hellwig <hch@xxxxxx>
Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
Signed-off-by: Sasha Levin <alexander.levin@xxxxxxxxxxxxx>
---
 block/blk-mq-sched.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index eca011fdfa0e..ae5d8f10a27c 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -236,7 +236,8 @@ bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio)
 		return e->type->ops.mq.bio_merge(hctx, bio);
 	}
 
-	if (hctx->flags & BLK_MQ_F_SHOULD_MERGE) {
+	if ((hctx->flags & BLK_MQ_F_SHOULD_MERGE) &&
+			!list_empty_careful(&ctx->rq_list)) {
 		/* default per sw-queue merge */
 		spin_lock(&ctx->lock);
 		ret = blk_mq_attempt_merge(q, ctx, bio);
-- 
2.17.1
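
The change is an instance of a check-before-lock pattern: peek at the
list without holding the lock (list_empty_careful() tolerates concurrent
updates), and only take ctx->lock and walk the sw queue when there may
actually be something to merge against. Below is a minimal, self-contained
userspace sketch of that pattern, not kernel code; the names (sw_queue,
try_merge) and the pthread mutex standing in for ctx->lock are assumptions
made for illustration only.

/*
 * Illustrative sketch, NOT kernel code.  It mimics the shape of
 * __blk_mq_sched_bio_merge() after this patch: check whether the
 * per-"cpu" queue looks empty before paying for the lock.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct req {
	int sector;
	struct req *next;
};

struct sw_queue {
	pthread_mutex_t lock;	/* stands in for ctx->lock */
	struct req *head;	/* stands in for ctx->rq_list */
};

/* Lockless peek, playing the role of list_empty_careful(). */
static bool queue_looks_empty(struct sw_queue *q)
{
	return q->head == NULL;
}

/* Walk the queue under the lock and "merge" an adjacent request. */
static bool try_merge(struct sw_queue *q, int bio_sector)
{
	bool merged = false;

	if (queue_looks_empty(q))
		return false;	/* the optimization: skip the lock entirely */

	pthread_mutex_lock(&q->lock);
	for (struct req *r = q->head; r; r = r->next) {
		if (r->sector + 1 == bio_sector) {
			merged = true;	/* a real merge would grow the request */
			break;
		}
	}
	pthread_mutex_unlock(&q->lock);
	return merged;
}

int main(void)
{
	struct req r = { .sector = 41, .next = NULL };
	struct sw_queue q = { .lock = PTHREAD_MUTEX_INITIALIZER, .head = NULL };

	printf("empty queue: merged=%d\n", try_merge(&q, 42)); /* no lock taken */
	q.head = &r;
	printf("queued 41:   merged=%d\n", try_merge(&q, 42)); /* merges */
	return 0;
}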