On Thu, Jul 06, 2023 at 01:14:33PM -0700, Bart Van Assche wrote:
> While testing the performance impact of zoned write pipelining, I
> noticed that merging happens even if merging has been disabled via
> sysfs. Fix this.
>
> Cc: Christoph Hellwig <hch@xxxxxx>
> Cc: Ming Lei <ming.lei@xxxxxxxxxx>
> Cc: Damien Le Moal <dlemoal@xxxxxxxxxx>
> Signed-off-by: Bart Van Assche <bvanassche@xxxxxxx>
> ---
>  block/blk-mq-sched.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
> index 67c95f31b15b..8883721f419a 100644
> --- a/block/blk-mq-sched.c
> +++ b/block/blk-mq-sched.c
> @@ -375,7 +375,8 @@ bool blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
>  bool blk_mq_sched_try_insert_merge(struct request_queue *q, struct request *rq,
>  				   struct list_head *free)
>  {
> -	return rq_mergeable(rq) && elv_attempt_insert_merge(q, rq, free);
> +	return !blk_queue_nomerges(q) && rq_mergeable(rq) &&
> +		elv_attempt_insert_merge(q, rq, free);
>  }
>  EXPORT_SYMBOL_GPL(blk_mq_sched_try_insert_merge);

elv_attempt_insert_merge() already checks blk_queue_nomerges() at its
entry (see the excerpt below), so this patch fixes nothing.

Given that blk_mq_sched_try_insert_merge() is only called from bfq and
mq-deadline, it may not matter to apply this optimization anyway.
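
For reference, the entry of elv_attempt_insert_merge() in
block/elevator.c looks roughly like this (the rest of the body is
elided):

bool elv_attempt_insert_merge(struct request_queue *q, struct request *rq,
			      struct list_head *free)
{
	...
	/* No merge is attempted at all when merging has been disabled
	 * via the nomerges sysfs attribute.
	 */
	if (blk_queue_nomerges(q))
		return false;
	...
}

Thanks,
Ming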