After reinstalling the upstream kernel, I also hit the reset issue, so
the issue isn't related to Ming Lei's patch. The reset issue has been
reported before; please have a look:
https://lore.kernel.org/linux-scsi/CAGS2=YrmwbhMpNA2REnBybvm5dehGRyKBX5Sq5BqY=ex=mwaUg@xxxxxxxxxxxxxx/

On Fri, May 12, 2023 at 23:06, Ming Lei <ming.lei@xxxxxxxxxx> wrote:
>
> A passthrough (pt) request shouldn't be queued to the scheduler; in
> particular, some schedulers (such as bfq) assume that req->bio is
> always available and that the blk-cgroup can be retrieved via the bio.
>
> A pt request can also be part of error handling, so it is better to
> always queue it into hctx->dispatch directly.
>
> Fix this issue by queuing pt requests from the plug list to
> hctx->dispatch directly.
>
> Reported-by: Guangwu Zhang <guazhang@xxxxxxxxxx>
> Investigated-by: Yu Kuai <yukuai1@xxxxxxxxxxxxxxx>
> Fixes: 1c2d2fff6dc0 ("block: wire-up support for passthrough plugging")
> Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
> ---
> Guang Wu, please test this patch and provide us with the result.
>
>  block/blk-mq.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index f6dad0886a2f..11efaefa26c3 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -2711,6 +2711,7 @@ static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched)
>  	struct request *requeue_list = NULL;
>  	struct request **requeue_lastp = &requeue_list;
>  	unsigned int depth = 0;
> +	bool pt = false;
>  	LIST_HEAD(list);
>
>  	do {
> @@ -2719,7 +2720,9 @@ static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched)
>  		if (!this_hctx) {
>  			this_hctx = rq->mq_hctx;
>  			this_ctx = rq->mq_ctx;
> -		} else if (this_hctx != rq->mq_hctx || this_ctx != rq->mq_ctx) {
> +			pt = blk_rq_is_passthrough(rq);
> +		} else if (this_hctx != rq->mq_hctx || this_ctx != rq->mq_ctx ||
> +			   pt != blk_rq_is_passthrough(rq)) {
>  			rq_list_add_tail(&requeue_lastp, rq);
>  			continue;
>  		}
> @@ -2731,10 +2734,15 @@ static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched)
>  	trace_block_unplug(this_hctx->queue, depth, !from_sched);
>
>  	percpu_ref_get(&this_hctx->queue->q_usage_counter);
> -	if (this_hctx->queue->elevator) {
> +	if (this_hctx->queue->elevator && !pt) {
>  		this_hctx->queue->elevator->type->ops.insert_requests(this_hctx,
>  				&list, 0);
>  		blk_mq_run_hw_queue(this_hctx, from_sched);
> +	} else if (pt) {
> +		spin_lock(&this_hctx->lock);
> +		list_splice_tail_init(&list, &this_hctx->dispatch);
> +		spin_unlock(&this_hctx->lock);
> +		blk_mq_run_hw_queue(this_hctx, from_sched);
>  	} else {
>  		blk_mq_insert_requests(this_hctx, this_ctx, &list, from_sched);
>  	}
> --
> 2.38.1
>
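
By the way, for anyone who wants to poke at the passthrough path from
userspace, below is a minimal sketch (not Ming Lei's reproducer; the
device path /dev/sda and the test conditions are assumptions) that
issues a single SCSI TEST UNIT READY through the SG_IO ioctl. SG_IO
commands enter the block layer as pt requests, though whether a given
submission actually goes through the plug list depends on the
submission context, so treat this only as a starting point.

/*
 * Hypothetical pt-request sketch: send TEST UNIT READY via SG_IO.
 * Build: gcc -o tur tur.c
 * Run:   ./tur /dev/sda   (device path is an assumption; needs root)
 */
#include <fcntl.h>
#include <scsi/sg.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "/dev/sda";
	unsigned char cdb[6] = { 0x00, 0, 0, 0, 0, 0 };	/* TEST UNIT READY */
	unsigned char sense[32];
	struct sg_io_hdr hdr;
	int fd = open(dev, O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(&hdr, 0, sizeof(hdr));
	hdr.interface_id = 'S';			/* mandatory for SG_IO */
	hdr.dxfer_direction = SG_DXFER_NONE;	/* no data transfer */
	hdr.cmd_len = sizeof(cdb);
	hdr.cmdp = cdb;
	hdr.mx_sb_len = sizeof(sense);
	hdr.sbp = sense;
	hdr.timeout = 5000;			/* milliseconds */

	if (ioctl(fd, SG_IO, &hdr) < 0)
		perror("SG_IO");
	else
		printf("status=0x%x host_status=0x%x driver_status=0x%x\n",
		       hdr.status, hdr.host_status, hdr.driver_status);

	close(fd);
	return 0;
}

Running something like this in a loop against the affected device while
the regular I/O load is going may help confirm whether the reset still
triggers with the patch applied.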