Ensure that the request_queue is refcounted during its full ioctl cycle.
This avoids possible races against removal, given blk_get_queue() also
checks to ensure the queue is not dying.

This small race is possible if you defer removal of the request_queue
and userspace fires off an ioctl for the device in the meantime.

Cc: Bart Van Assche <bvanassche@xxxxxxx>
Cc: Omar Sandoval <osandov@xxxxxx>
Cc: Hannes Reinecke <hare@xxxxxxxx>
Cc: Nicolai Stange <nstange@xxxxxxx>
Cc: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: yu kuai <yukuai3@xxxxxxxxxx>
Reviewed-by: Bart Van Assche <bvanassche@xxxxxxx>
Signed-off-by: Luis Chamberlain <mcgrof@xxxxxxxxxx>
---
 kernel/trace/blktrace.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index 15086227592f..17e144d15779 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -701,6 +701,9 @@ int blk_trace_ioctl(struct block_device *bdev, unsigned cmd, char __user *arg)
 	if (!q)
 		return -ENXIO;
 
+	if (!blk_get_queue(q))
+		return -ENXIO;
+
 	mutex_lock(&q->blk_trace_mutex);
 
 	switch (cmd) {
@@ -729,6 +732,9 @@ int blk_trace_ioctl(struct block_device *bdev, unsigned cmd, char __user *arg)
 	}
 
 	mutex_unlock(&q->blk_trace_mutex);
+
+	blk_put_queue(q);
+
 	return ret;
 }
-- 
2.25.1
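
[ Illustrative note, not part of the patch: a rough sketch of the shape
  blk_trace_ioctl() takes once this change is applied.  The existing
  switch (cmd) handling is elided and stood in for by a placeholder;
  only the reference counting around the full ioctl cycle is the point. ]

int blk_trace_ioctl(struct block_device *bdev, unsigned cmd, char __user *arg)
{
	struct request_queue *q = bdev_get_queue(bdev);
	int ret;

	if (!q)
		return -ENXIO;

	/* Hold a reference for the whole ioctl; fails if the queue is dying. */
	if (!blk_get_queue(q))
		return -ENXIO;

	mutex_lock(&q->blk_trace_mutex);

	/* ... existing switch (cmd) handling of BLKTRACESETUP etc. ... */
	ret = -ENOTTY;	/* placeholder for the elided command handling */

	mutex_unlock(&q->blk_trace_mutex);

	/* Drop the reference only once the ioctl has fully completed. */
	blk_put_queue(q);

	return ret;
}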