> On 27 Sep 2019, at 09.24, Ming Lei <ming.lei@xxxxxxxxxx> wrote:
>
> Now in the real MQ case, the io scheduler may be bypassed. Not only can
> this hurt performance on some slow MQ devices, it also breaks zoned
> devices, which depend on mq-deadline to respect the write order within
> a zone.
>
> So don't bypass the io scheduler if one is set up.
>
> This patch basically doubles sequential write performance on MQ
> scsi_debug when mq-deadline is applied.
>
> Cc: Bart Van Assche <bvanassche@xxxxxxx>
> Cc: Hannes Reinecke <hare@xxxxxxxx>
> Cc: Damien Le Moal <damien.lemoal@xxxxxxx>
> Cc: Dave Chinner <dchinner@xxxxxxxxxx>
> Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
> ---
>  block/blk-mq.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 20a49be536b5..d7aed6518e62 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -2003,6 +2003,8 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
>  		}
>
>  		blk_add_rq_to_plug(plug, rq);
> +	} else if (q->elevator) {
> +		blk_mq_sched_insert_request(rq, false, true, true);
>  	} else if (plug && !blk_queue_nomerges(q)) {
>  		/*
>  		 * We do limited plugging. If the bio can be merged, do that.
> @@ -2026,8 +2028,8 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
>  			blk_mq_try_issue_directly(data.hctx, same_queue_rq,
>  					&cookie);
>  		}
> -	} else if ((q->nr_hw_queues > 1 && is_sync) || (!q->elevator &&
> -			!data.hctx->dispatch_busy)) {
> +	} else if ((q->nr_hw_queues > 1 && is_sync) ||
> +			!data.hctx->dispatch_busy) {
>  		blk_mq_try_issue_directly(data.hctx, rq, &cookie);
>  	} else {
>  		blk_mq_sched_insert_request(rq, false, true, true);
> --
> 2.20.1

Looks good to me. Fixes a couple of issues we have seen with zoned devices too.

Reviewed-by: Javier González <javier@xxxxxxxxxxx>