Re: [PATCH v3] blk-mq: punt failed direct issue to dispatch list

On 12/7/18 9:24 AM, Jens Axboe wrote:
> On 12/7/18 9:19 AM, Bart Van Assche wrote:
>> On Thu, 2018-12-06 at 22:17 -0700, Jens Axboe wrote:
>>> Instead of making special cases for what we can direct issue, and now
>>> having to deal with DM solving the livelock while still retaining a BUSY
>>> condition feedback loop, always just add a request that has been through
>>> ->queue_rq() to the hardware queue dispatch list. These are safe to use
>>> as no merging can take place there. Additionally, for requests that do carry
>>> prepped data from drivers, we no longer depend on that data not sharing space
>>> in the request structure in order to safely add them to the IO scheduler lists.
>>
>> How about making blk_mq_sched_insert_request() complain if it is passed a
>> request with the RQF_DONTPREP flag set, so that this problem can't be
>> reintroduced in the future? Otherwise this patch looks fine to me.
> 
> I agree, but I think we should do that as a follow-up patch. I don't want to
> touch this one if we can avoid it. The thought did cross my mind, too. It
> should be impossible now that everything goes to the dispatch list.

Something like the below.


diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 29bfe8017a2d..9e5bda8800f8 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -377,6 +377,16 @@ void blk_mq_sched_insert_request(struct request *rq, bool at_head,
 
 	WARN_ON(e && (rq->tag != -1));
 
+	/*
+	 * It's illegal to insert a request into the scheduler that has
+	 * been through ->queue_rq(). Warn for that case, and use a bypass
+	 * insert to be safe.
+	 */
+	if (WARN_ON_ONCE(rq->rq_flags & RQF_DONTPREP)) {
+		blk_mq_request_bypass_insert(rq, false);
+		goto run;
+	}
+
 	if (blk_mq_sched_bypass_insert(hctx, !!e, rq))
 		goto run;
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 6a7566244de3..d5f890d5c814 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1595,15 +1595,25 @@ void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
 			    struct list_head *list)
 
 {
-	struct request *rq;
+	struct request *rq, *tmp;
 
 	/*
 	 * preemption doesn't flush plug list, so it's possible ctx->cpu is
 	 * offline now
 	 */
-	list_for_each_entry(rq, list, queuelist) {
+	list_for_each_entry_safe(rq, tmp, list, queuelist) {
 		BUG_ON(rq->mq_ctx != ctx);
 		trace_block_rq_insert(hctx->queue, rq);
+
+		/*
+		 * It's illegal to insert a request into the scheduler that has
+		 * been through ->queue_rq(). Warn for that case, and use a
+		 * bypass insert to be safe.
+		 */
+		if (WARN_ON_ONCE(rq->rq_flags & RQF_DONTPREP)) {
+			list_del_init(&rq->queuelist);
+			blk_mq_request_bypass_insert(rq, false);
+		}
 	}
 
 	spin_lock(&ctx->lock);

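For reference, here is a rough sketch of the direct-issue fallback described in
the quoted commit message above. This is not the actual v3 patch, and
sketch_issue_directly() is a made-up name; the real helpers it leans on
(blk_mq_request_bypass_insert(), blk_mq_end_request(), the BLK_STS_* codes) do
exist in the block layer. The point is the fallback itself: once the driver has
seen the request via ->queue_rq() and returned a BUSY-type status, the request
is punted to the hctx dispatch list, never to the scheduler lists.

#include <linux/blkdev.h>
#include <linux/blk-mq.h>
#include "blk-mq.h"	/* blk_mq_request_bypass_insert(), as in block/blk-mq.c */

/* Sketch only, not the code from the v3 patch. */
static blk_status_t sketch_issue_directly(struct blk_mq_hw_ctx *hctx,
					  struct request *rq)
{
	struct blk_mq_queue_data bd = {
		.rq	= rq,
		.last	= true,
	};
	blk_status_t ret;

	ret = hctx->queue->mq_ops->queue_rq(hctx, &bd);
	switch (ret) {
	case BLK_STS_OK:
		break;
	case BLK_STS_RESOURCE:
	case BLK_STS_DEV_RESOURCE:
		/*
		 * The request has been through ->queue_rq() and may carry
		 * driver-prepped data (RQF_DONTPREP), so it must not be
		 * merged or sit on the scheduler lists. Punt it to the hctx
		 * dispatch list and kick the queue so it gets retried.
		 */
		blk_mq_request_bypass_insert(rq, true);
		ret = BLK_STS_OK;
		break;
	default:
		/* Hard error: complete the request with the error status. */
		blk_mq_end_request(rq, ret);
		break;
	}

	return ret;
}

With the follow-up diff above, any request that somehow still reaches the
scheduler insert paths with RQF_DONTPREP set triggers a one-shot warning and is
routed through the same bypass insert, so the invariant holds even if a caller
gets it wrong.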
-- 
Jens Axboe



