On 5/4/23 22:56, Christoph Hellwig wrote:
> On Wed, May 03, 2023 at 03:52:05PM -0700, Bart Van Assche wrote:
>> blk_mq_free_requests() calls dd_finish_request() indirectly. Prevent
>> nested locking of dd->lock and dd->zone_lock by unlocking dd->lock
>> before calling blk_mq_free_requests().
>
> Do you have a reproducer for this that we could wire up in blktests?
>
> Also please add a Fixes tag and move it to the beginning of the series.
Hi Christoph,

I think the nested locking is triggered during every run of blktests.
Additionally, I don't think that nested locking of spinlocks is a bug,
so I'm surprised to see a request to add a Fixes: tag?
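
For reference, the nesting comes from roughly this call chain (with
dd->lock held across the whole sequence, as shown in the diff below):

  dd_insert_request()                  /* caller holds dd->lock */
    blk_mq_sched_try_insert_merge()    /* may move requests onto 'free' */
    blk_mq_free_requests(&free)
      dd_finish_request()              /* takes dd->zone_lock */

Two different locks are involved, dd->lock and dd->zone_lock, and as far
as I can tell nothing acquires them in the opposite order, so the nesting
by itself cannot deadlock.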
>>  static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
>>  			      blk_insert_t flags)
>> +	__must_hold(dd->lock)
>>  {
>>  	struct request_queue *q = hctx->queue;
>>  	struct deadline_data *dd = q->elevator->elevator_data;
>> @@ -784,7 +785,9 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
>>  	}
>>
>>  	if (blk_mq_sched_try_insert_merge(q, rq, &free)) {
>> +		spin_unlock(&dd->lock);
>>  		blk_mq_free_requests(&free);
>> +		spin_lock(&dd->lock);
>>  		return;
> Given that free is a list, why don't we declare the free list in
> dd_insert_requests and just pass it to dd_insert_request and then do
> one single blk_mq_free_requests call after the loop?
That sounds like an interesting approach to me. I will make this change.
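An untested sketch of what I have in mind, with the unchanged parts of
dd_insert_request() elided:

static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
			      blk_insert_t flags, struct list_head *free)
	__must_hold(dd->lock)
{
	struct request_queue *q = hctx->queue;
	struct deadline_data *dd = q->elevator->elevator_data;

	/* ... */

	/* Merged requests end up on 'free'; the caller frees them. */
	if (blk_mq_sched_try_insert_merge(q, rq, free))
		return;

	/* ... */
}

static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
			       struct list_head *list, blk_insert_t flags)
{
	struct request_queue *q = hctx->queue;
	struct deadline_data *dd = q->elevator->elevator_data;
	LIST_HEAD(free);

	spin_lock(&dd->lock);
	while (!list_empty(list)) {
		struct request *rq;

		rq = list_first_entry(list, struct request, queuelist);
		list_del_init(&rq->queuelist);
		dd_insert_request(hctx, rq, flags, &free);
	}
	spin_unlock(&dd->lock);

	/* Free merged requests once, after dd->lock has been released. */
	blk_mq_free_requests(&free);
}

With this, blk_mq_free_requests() runs outside dd->lock, so
dd_finish_request() is no longer called with dd->lock held and the
unlock/relock pair in dd_insert_request() goes away.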
Thanks,
Bart.