On 4/29/19 6:52 PM, Ming Lei wrote:
> Just like aio/io_uring, we need to grab two refcounts for queuing one
> request: one for submission, the other for completion.
>
> If the request isn't queued from the plug code path, the refcount grabbed
> in generic_make_request() serves the submission side. In theory, this
> refcount should have been released after the submission (async run queue)
> is done. blk_freeze_queue() works together with blk_sync_queue() to
> avoid races between queue cleanup and IO submission, and async run-queue
> activities are canceled because hctx->run_work is scheduled with
> the refcount held, so it is fine not to hold the refcount while
> running the run-queue work function to dispatch IO.
>
> However, if the request is held on the plug list and finally queued
> from the plug code path, the refcount on the submission side is actually
> missing, and we may start to run the queue after the queue has been
> removed, because the queue's kobject refcount isn't guaranteed to be held
> in the flush-plug-list context; a kernel oops is then triggered. See the
> following race:
>
> blk_mq_flush_plug_list():
>         blk_mq_sched_insert_requests()
>                 insert requests to sw queue or scheduler queue
>                 blk_mq_run_hw_queue
>
> Because of the concurrent run queue, all requests inserted above may be
> completed before the above blk_mq_run_hw_queue() is called. The queue can
> then be freed during the above blk_mq_run_hw_queue().
>
> Fix the issue by grabbing .q_usage_counter before calling
> blk_mq_sched_insert_requests() in blk_mq_flush_plug_list(). This is safe
> because the queue is definitely alive before inserting the request.

Reviewed-by: Bart Van Assche <bvanassche@xxxxxxx>
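
For readers following along, a minimal sketch of the approach described
above: take a .q_usage_counter reference before inserting the plugged
requests and drop it only after the run queue has returned. This is written
in the style of block/blk-mq-sched.c of that era, assumes the kernel-internal
blk-mq types and helpers (struct blk_mq_hw_ctx, percpu_ref_get()/put(),
blk_mq_insert_requests(), blk_mq_run_hw_queue()), and is simplified -- the
scheduler (elevator) insert branch is omitted -- so it is not the exact
patch hunk:

/*
 * Sketch only: simplified blk_mq_sched_insert_requests(), showing where
 * the .q_usage_counter reference is taken and dropped.  The scheduler
 * (elevator) insert path is omitted for brevity.
 */
void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
				  struct blk_mq_ctx *ctx,
				  struct list_head *list, bool run_queue_async)
{
	struct request_queue *q = hctx->queue;

	/*
	 * This helper is only called from the flush-plug path, where no
	 * submission-side reference is held.  Grab one usage counter so
	 * the queue cannot be cleaned up until the run queue below has
	 * returned, even if every inserted request completes first.
	 */
	percpu_ref_get(&q->q_usage_counter);

	/* insert requests into the sw queue (no-scheduler case) */
	blk_mq_insert_requests(hctx, ctx, list);

	blk_mq_run_hw_queue(hctx, run_queue_async);

	percpu_ref_put(&q->q_usage_counter);
}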