On 01/26/2017 02:47 PM, Bart Van Assche wrote:
> (gdb) list *(blk_mq_sched_get_request+0x310)
> 0xffffffff8132dcf0 is in blk_mq_sched_get_request (block/blk-mq-sched.c:136).
> 131                             rq->rq_flags |= RQF_QUEUED;
> 132                     } else
> 133                             rq = __blk_mq_alloc_request(data, op);
> 134             } else {
> 135                     rq = __blk_mq_alloc_request(data, op);
> 136                     data->hctx->tags->rqs[rq->tag] = rq;
> 137             }
> 138
> 139             if (rq) {
> 140                     if (!op_is_flush(op)) {
>
> (gdb) disas blk_mq_sched_get_request
> [ ... ]
> 0xffffffff8132dce3 <+771>:   callq  0xffffffff81324ab0 <__blk_mq_alloc_request>
> 0xffffffff8132dce8 <+776>:   mov    %rax,%rcx
> 0xffffffff8132dceb <+779>:   mov    0x18(%r12),%rax
> 0xffffffff8132dcf0 <+784>:   movslq 0x5c(%rcx),%rdx
> [ ... ]
> (gdb) print &((struct request *)0)->tag
> $1 = (int *) 0x5c <irq_stack_union+92>
>
> I think this means that rq == NULL and that a test for rq is missing
> after the __blk_mq_alloc_request() call?

That is exactly what it means; it looks like that one path doesn't
handle a failed allocation. You'd have to exhaust the pool with atomic
allocs for this to trigger, and we don't do that at all in the normal
IO path. So good catch, it must be the dm part that enables this,
since it does NOWAIT allocations.

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 3136696f4991..c27613de80c5 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -134,7 +134,8 @@ struct request *blk_mq_sched_get_request(struct request_queue *q,
 		rq = __blk_mq_alloc_request(data, op);
 	} else {
 		rq = __blk_mq_alloc_request(data, op);
-		data->hctx->tags->rqs[rq->tag] = rq;
+		if (rq)
+			data->hctx->tags->rqs[rq->tag] = rq;
 	}
 
 	if (rq) {

-- 
Jens Axboe
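
[Editor's note: the gdb expression quoted above, print &((struct
request *)0)->tag, is the classic null-pointer offsetof trick: taking
a member's address relative to a NULL base yields the member's byte
offset, here 0x5c, which matches the faulting movslq 0x5c(%rcx) and
shows that %rcx (rq, copied from %rax right after the call) was NULL.
A minimal standalone sketch of the same idea follows; the struct
layout is contrived for the example and is not the kernel's real
struct request.]

#include <stdio.h>
#include <stddef.h>

/* Stand-in struct; the padding is chosen so that "tag" lands at
 * offset 0x5c, mirroring the crash, not the kernel's real layout. */
struct request {
	char pad[0x5c];
	int tag;
};

int main(void)
{
	/* Both lines print 0x5c, the displacement seen in the faulting
	 * "movslq 0x5c(%rcx),%rdx". The null-pointer cast is formally
	 * undefined behavior in C, but it is the idiom gdb evaluates
	 * here and what traditional offsetof() macros were built on. */
	printf("%#zx\n", (size_t) &((struct request *) 0)->tag);
	printf("%#zx\n", offsetof(struct request, tag));
	return 0;
}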
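
[Editor's note: for completeness, a self-contained userspace sketch of
the bug pattern the patch fixes: a NOWAIT-style allocator returns NULL
on pool exhaustion rather than sleeping, so the caller must test the
result before using rq->tag as an index. All names here (tag_pool,
alloc_request_nowait, and so on) are invented for the example; this is
not the blk-mq code itself.]

#include <stdio.h>

#define POOL_SIZE 2

struct request {
	int tag;
};

struct tag_pool {
	struct request rqs[POOL_SIZE];
	int used[POOL_SIZE];
};

/* NOWAIT-style alloc: returns NULL on exhaustion instead of blocking,
 * just as an atomic blk-mq tag allocation can fail. */
static struct request *alloc_request_nowait(struct tag_pool *pool)
{
	for (int i = 0; i < POOL_SIZE; i++) {
		if (!pool->used[i]) {
			pool->used[i] = 1;
			pool->rqs[i].tag = i;
			return &pool->rqs[i];
		}
	}
	return NULL;	/* pool exhausted: the case the fix guards */
}

int main(void)
{
	struct tag_pool pool = { 0 };
	struct request *map[POOL_SIZE] = { 0 };

	/* Exhaust the pool, then attempt one more NOWAIT allocation. */
	for (int i = 0; i < POOL_SIZE + 1; i++) {
		struct request *rq = alloc_request_nowait(&pool);

		/* The fix: check rq before indexing with rq->tag.
		 * Without this test, the final iteration dereferences
		 * NULL, like the crash at blk-mq-sched.c:136. */
		if (rq)
			map[rq->tag] = rq;
		else
			printf("allocation %d failed, skipping\n", i);
	}
	return 0;
}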