Re: [RESEND v4 4/6] block: alloc map and request for new hardware queue

On Tue, Apr 28, 2020 at 09:29:59PM +0800, Weiping Zhang wrote:
> Allocate a new map and requests for each new hardware queue when
> increasing the hardware queue count. Before this patch, a warning was
> shown for each new hardware queue, but that is not enough: these hctx
> have no map and no requests, so when a bio is mapped to one of these
> hardware queues, getting a request from that hctx triggers a kernel
> panic.
> 
> Test environment:
>  * A NVMe disk supports 128 io queues
>  * 96 cpus in system
> 
> A corner case can always trigger this panic: 96 io queues are allocated
> for the HCTX_TYPE_DEFAULT type, with the corresponding kernel log:
> nvme nvme0: 96/0/0 default/read/poll queues. Now we set nvme write
> queues to 96, so nvme allocates the other 32 queues for read, but
> blk_mq_update_nr_hw_queues does not allocate a map and requests for
> these newly added io queues. So when a process reads the nvme disk, it
> triggers a kernel panic when getting a request from these hardware
> contexts.
> 

The map and requests are supposed to be allocated via __blk_mq_alloc_rq_map(),
called from blk_mq_map_swqueue(), so why is this issue triggered?

The reason could be that only the HCTX_TYPE_DEFAULT queue type is handled
when calling __blk_mq_alloc_rq_map(), and it looks like all queue types
should have been covered here.

Could you test the following patch and see if it fixes the issue?
If so, we can also save one intermediate variable.


diff --git a/block/blk-mq.c b/block/blk-mq.c
index 12dee4ecd5cc..7e77e27b613e 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2742,19 +2742,6 @@ static void blk_mq_map_swqueue(struct request_queue *q)
 	 * If the cpu isn't present, the cpu is mapped to first hctx.
 	 */
 	for_each_possible_cpu(i) {
-		hctx_idx = set->map[HCTX_TYPE_DEFAULT].mq_map[i];
-		/* unmapped hw queue can be remapped after CPU topo changed */
-		if (!set->tags[hctx_idx] &&
-		    !__blk_mq_alloc_rq_map(set, hctx_idx)) {
-			/*
-			 * If tags initialization fail for some hctx,
-			 * that hctx won't be brought online.  In this
-			 * case, remap the current ctx to hctx[0] which
-			 * is guaranteed to always have tags allocated
-			 */
-			set->map[HCTX_TYPE_DEFAULT].mq_map[i] = 0;
-		}
-
 		ctx = per_cpu_ptr(q->queue_ctx, i);
 		for (j = 0; j < set->nr_maps; j++) {
 			if (!set->map[j].nr_queues) {
@@ -2763,6 +2750,19 @@ static void blk_mq_map_swqueue(struct request_queue *q)
 				continue;
 			}
 
+			/* unmapped hw queue can be remapped after CPU topo changed */
+			hctx_idx = set->map[j].mq_map[i];
+			if (!set->tags[hctx_idx] &&
+			    !__blk_mq_alloc_rq_map(set, hctx_idx)) {
+				/*
+				 * If tags initialization fail for some hctx,
+				 * that hctx won't be brought online.  In this
+				 * case, remap the current ctx to hctx[0] which
+				 * is guaranteed to always have tags allocated
+				 */
+				set->map[j].mq_map[i] = 0;
+			}
+
 			hctx = blk_mq_map_queue_type(q, j, i);
 			ctx->hctxs[j] = hctx;
 			/*


Thanks,
Ming



