From: Yu Kuai <yukuai3@xxxxxxxxxx>

Recursive spin_lock/unlock_irq() is not safe, because spin_unlock_irq()
enables irq unconditionally:

spin_lock_irq	queue_lock	-> disable irq
spin_lock_irq	ioc->lock
spin_unlock_irq	ioc->lock	-> enable irq
/*
 * An AA deadlock will be triggered if the current context is preempted
 * by an irq here, and the irq handler tries to take queue_lock again.
 */
spin_unlock_irq	queue_lock

Fix this problem by using spin_lock/unlock() directly for 'ioc->lock',
since irq is already disabled by 'queue_lock'.

Fixes: 5a0ac57c48aa ("blk-ioc: protect ioc_destroy_icq() by 'queue_lock'")
Signed-off-by: Yu Kuai <yukuai3@xxxxxxxxxx>
---
 block/blk-ioc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index d5db92e62c43..25dd4db11121 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -179,9 +179,9 @@ void ioc_clear_queue(struct request_queue *q)
		 * Other context won't hold ioc lock to wait for queue_lock, see
		 * details in ioc_release_fn().
		 */
-		spin_lock_irq(&icq->ioc->lock);
+		spin_lock(&icq->ioc->lock);
		ioc_destroy_icq(icq);
-		spin_unlock_irq(&icq->ioc->lock);
+		spin_unlock(&icq->ioc->lock);
	}
	spin_unlock_irq(&q->queue_lock);
 }
-- 
2.39.2