Hi Jens,
Could you please consider applying this patch?
Thanks,
Kuai
On 2021/08/08 15:03, Yu Kuai wrote:
flush_end_io() currently decrements the request refcount
unconditionally. However, the request may already be idle with a
refcount of zero, because flush_end_io() can be called concurrently;
for example, nbd_clear_que() can run concurrently with normal io
completion or io timeout. Thus check for the idle state before
decrementing, to avoid a refcount_t underflow warning. (A minimal
sketch after the diff below illustrates the guarded decrement.)
Signed-off-by: Yu Kuai <yukuai3@xxxxxxxxxx>
---
block/blk-flush.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 1002f6c58181..9b65dc43702c 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -222,7 +222,8 @@ static void flush_end_io(struct request *flush_rq, blk_status_t error)
 	/* release the tag's ownership to the req cloned from */
 	spin_lock_irqsave(&fq->mq_flush_lock, flags);
-	if (!refcount_dec_and_test(&flush_rq->ref)) {
+	if (blk_mq_rq_state(flush_rq) == MQ_RQ_IDLE ||
+	    !refcount_dec_and_test(&flush_rq->ref)) {
 		fq->rq_status = error;
 		spin_unlock_irqrestore(&fq->mq_flush_lock, flags);
 		return;
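
For readers less familiar with the underflow, here is a minimal
userspace sketch of the guarded decrement the patch introduces. This
is not kernel code: refcount_dec_and_test() is approximated with C11
atomics, and the request struct, MQ_RQ_IDLE, and flush_end_io() are
simplified stand-ins for the real blk-mq definitions, with locking
omitted.

/*
 * Userspace sketch (assumptions: simplified request struct, no
 * locking, C11 atomics instead of the kernel's refcount_t).
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

enum rq_state { MQ_RQ_IDLE, MQ_RQ_IN_FLIGHT };

struct request {
	enum rq_state state;
	atomic_int ref;
};

/*
 * Approximates refcount_dec_and_test(): decrement and report whether
 * the count hit zero. Decrementing an already-zero count wraps to -1;
 * the kernel's refcount_t catches exactly that with an underflow
 * warning.
 */
static bool refcount_dec_and_test(atomic_int *r)
{
	return atomic_fetch_sub(r, 1) == 1;
}

static void flush_end_io(struct request *flush_rq)
{
	/*
	 * The guard from the patch: if a concurrent path (e.g.
	 * nbd_clear_que() or an io timeout) already returned the
	 * request to idle, its refcount is zero, so skip the
	 * decrement entirely instead of underflowing.
	 */
	if (flush_rq->state == MQ_RQ_IDLE ||
	    !refcount_dec_and_test(&flush_rq->ref)) {
		printf("request busy or already idle, nothing to do\n");
		return;
	}
	printf("last reference dropped, complete the flush\n");
}

int main(void)
{
	struct request rq = { .state = MQ_RQ_IDLE };
	atomic_init(&rq.ref, 0);

	/* Without the state check, this would drive ref from 0 to -1. */
	flush_end_io(&rq);
	return 0;
}

Without the MQ_RQ_IDLE check, the call in main() would decrement the
counter from 0 to -1, which is the refcount_t underflow the commit
message describes; checking the request state first makes the
decrement a no-op on an already-idle request.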