On Wed 02-06-21 17:25:52, Ming Lei wrote:
> On Thu, May 20, 2021 at 01:25:28PM +0200, Jan Kara wrote:
> > Provided the device driver does not implement dispatch budget accounting
> > (which only SCSI does), the loop in __blk_mq_do_dispatch_sched() pulls
> > requests from the IO scheduler as long as it is willing to give out any.
> > That defeats scheduling heuristics inside the scheduler by creating a
> > false impression that the device can take more IO when it in fact
> > cannot.
> >
> > For example, with the BFQ IO scheduler on top of a virtio-blk device,
> > setting the blkio cgroup weight has barely any impact on the observed
> > throughput of async IO because __blk_mq_do_dispatch_sched() always sucks
> > out all the IO queued in BFQ. BFQ first submits IO from higher-weight
> > cgroups but when that is all dispatched, it will give out IO of
> > lower-weight cgroups as well. And then we have to wait for all this IO
> > to be dispatched to the disk (which means a lot of it actually has to
> > complete) before the IO scheduler is queried again for dispatching more
> > requests. This completely destroys any service differentiation.
> >
> > So grab the request tag for a request already when it is pulled out of
> > the IO scheduler in __blk_mq_do_dispatch_sched() and do not pull any
> > more requests if we cannot get it, because we are unlikely to be able to
> > dispatch it. That way only a single request is going to wait in the
> > dispatch list for some tag to free.
> >
> > Signed-off-by: Jan Kara <jack@xxxxxxx>
> > ---
> >  block/blk-mq-sched.c | 12 +++++++++++-
> >  block/blk-mq.c       |  2 +-
> >  block/blk-mq.h       |  2 ++
> >  3 files changed, 14 insertions(+), 2 deletions(-)
> >
> > diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
> > index 996a4b2f73aa..714e678f516a 100644
> > --- a/block/blk-mq-sched.c
> > +++ b/block/blk-mq-sched.c
> > @@ -168,9 +168,19 @@ static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
> >  		 * in blk_mq_dispatch_rq_list().
> >  		 */
> >  		list_add_tail(&rq->queuelist, &rq_list);
> > +		count++;
> >  		if (rq->mq_hctx != hctx)
> >  			multi_hctxs = true;
> > -	} while (++count < max_dispatch);
> > +
> > +		/*
> > +		 * If we cannot get tag for the request, stop dequeueing
> > +		 * requests from the IO scheduler. We are unlikely to be able
> > +		 * to submit them anyway and it creates false impression for
> > +		 * scheduling heuristics that the device can take more IO.
> > +		 */
> > +		if (!blk_mq_get_driver_tag(rq))
> > +			break;
> > +	} while (count < max_dispatch);
> >
> >  	if (!count) {
> >  		if (run_queue)
> > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > index c86c01bfecdb..bc2cf80d2c3b 100644
> > --- a/block/blk-mq.c
> > +++ b/block/blk-mq.c
> > @@ -1100,7 +1100,7 @@ static bool __blk_mq_get_driver_tag(struct request *rq)
> >  	return true;
> >  }
> >
> > -static bool blk_mq_get_driver_tag(struct request *rq)
> > +bool blk_mq_get_driver_tag(struct request *rq)
> >  {
> >  	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
> >
> > diff --git a/block/blk-mq.h b/block/blk-mq.h
> > index 9ce64bc4a6c8..81a775171be7 100644
> > --- a/block/blk-mq.h
> > +++ b/block/blk-mq.h
> > @@ -259,6 +259,8 @@ static inline void blk_mq_put_driver_tag(struct request *rq)
> >  	__blk_mq_put_driver_tag(rq->mq_hctx, rq);
> >  }
> >
> > +bool blk_mq_get_driver_tag(struct request *rq);
> > +
> >  static inline void blk_mq_clear_mq_map(struct blk_mq_queue_map *qmap)
> >  {
> >  	int cpu;
>
> Thinking about it further, this patch looks fine, and it is safe to use the
> driver tag allocation result to decide whether more requests need to be
> dequeued, since a queue run always follows when we break out of the loop.
> Also I can observe that the effect of io.bfq.weight is improved on
> virtio-blk, so
>
> Reviewed-by: Ming Lei <ming.lei@xxxxxxxxxx>

OK, thanks for your review!

								Honza
--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR
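
For readers who do not want to chase the kernel sources, below is a minimal
standalone sketch of the behaviour the __blk_mq_do_dispatch_sched() hunk
implements. It is a toy model under assumed numbers, not kernel code:
sched_dequeue(), get_driver_tag(), MAX_DISPATCH and the counts are made-up
stand-ins for the scheduler's ->dispatch_request() hook,
blk_mq_get_driver_tag(), and the tag pool.

```c
/*
 * Toy model of the patched dispatch loop: stop pulling requests from the
 * IO scheduler once driver tags run out.  Purely illustrative.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAX_DISPATCH	8	/* stand-in for the per-run dispatch limit */

static int sched_queued = 20;	/* requests the IO scheduler still holds */
static int free_tags = 3;	/* free driver tags, i.e. device capacity */

/* Stand-in for the scheduler's ->dispatch_request(): hand out one request. */
static bool sched_dequeue(void)
{
	if (!sched_queued)
		return false;
	sched_queued--;
	return true;
}

/* Stand-in for blk_mq_get_driver_tag(): succeeds only while tags remain. */
static bool get_driver_tag(void)
{
	if (!free_tags)
		return false;
	free_tags--;
	return true;
}

int main(void)
{
	int count = 0;

	do {
		if (!sched_dequeue())
			break;
		count++;
		/*
		 * The patched behaviour: if no driver tag is available, the
		 * device cannot take this request now, so stop draining the
		 * scheduler instead of pulling requests it cannot serve.
		 */
		if (!get_driver_tag())
			break;
	} while (count < MAX_DISPATCH);

	/* Prints "pulled 4 request(s), 16 still queued in the scheduler":
	 * three requests got tags, the fourth waits on the dispatch list,
	 * and the rest stay under the scheduler's control, so heuristics
	 * such as BFQ's cgroup weights can still act on them.
	 */
	printf("pulled %d request(s), %d still queued in the scheduler\n",
	       count, sched_queued);
	return 0;
}
```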