On 2020-09-15 15:39, Ming Lei wrote:
On Tue, Sep 15, 2020 at 02:41:02PM +0530, Pradeep P V K wrote:
Observed the below crash (use-after-free) while accessing the lock member
of bfq data.
context#1				context#2
					process_one_work()
kthread()				blk_mq_run_work_fn()
worker_thread()				 ->__blk_mq_run_hw_queue()
process_one_work()			  ->blk_mq_sched_dispatch_requests()
__blk_release_queue()			   ->blk_mq_do_dispatch_sched()
 ->__elevator_exit()
  ->blk_mq_exit_sched()
   ->exit_sched()
    ->kfree()
					    ->bfq_dispatch_request()
					     ->spin_unlock_irq(&bfqd->lock)

Just found __blk_release_queue() was killed in the v5.9 cycle.

Yes, on v5.9 blk_release_queue() is called directly via q->kobj when the
request_queue refcount goes to zero, whereas on older kernels (< 5.9)
blk_release_queue() schedules a work item to invoke __blk_release_queue().
Actually I'm not sure if the above race is easy to trigger on recent
kernels, because we do call cancel_delayed_work_sync() in
blk_mq_hw_sysfs_release(), which is usually called before
__elevator_exit() from blk_exit_queue()/blk_release_queue().
blk_mq_hw_sysfs_release() is called from blk_mq_release(), i.e. via
kobject_put(&hctx->kobj), which happens after __elevator_exit().
__elevator_exit() is called from blk_exit_queue(), which runs prior to
blk_mq_release().
So can you share your kernel version in which the issue is reproduced?
And can you reproduce this issue on v5.8 or v5.9-rc5?
This issue is seen on v5.4 stable and is very easy to reproduce there.
Sorry, I don't have a setup with v5.8 or a later kernel; I can help get it
tested on v5.4. From the issue's perspective, both v5.4 and the latest
kernels call blk_mq_release() -> blk_mq_hw_sysfs_release() after
__elevator_exit(), so I think it won't matter much here.
This is because the kblockd delayed work may still get scheduled around
blk_release_queue() and then access already-freed members of bfq_data.
240.212359: <2> Unable to handle kernel paging request at virtual address ffffffee2e33ad70
...
240.212637: <2> Workqueue: kblockd blk_mq_run_work_fn
240.212649: <2> pstate: 00c00085 (nzcv daIf +PAN +UAO)
240.212666: <2> pc : queued_spin_lock_slowpath+0x10c/0x2e0
240.212677: <2> lr : queued_spin_lock_slowpath+0x84/0x2e0
...
Call trace:
240.212865: <2> queued_spin_lock_slowpath+0x10c/0x2e0
240.212876: <2> do_raw_spin_lock+0xf0/0xf4
240.212890: <2> _raw_spin_lock_irq+0x74/0x94
240.212906: <2> bfq_dispatch_request+0x4c/0xd60
240.212918: <2> blk_mq_do_dispatch_sched+0xe0/0x1f0
240.212927: <2> blk_mq_sched_dispatch_requests+0x130/0x194
240.212940: <2> __blk_mq_run_hw_queue+0x100/0x158
240.212950: <2> blk_mq_run_work_fn+0x1c/0x28
240.212963: <2> process_one_work+0x280/0x460
240.212973: <2> worker_thread+0x27c/0x4dc
240.212986: <2> kthread+0x160/0x170
Fix this by cancelling any pending delayed work before the elevator exits.
Changes since V1:
- Moved the logic into blk_cleanup_queue() as per Ming's comments.
Signed-off-by: Pradeep P V K <ppvk@xxxxxxxxxxxxxx>
---
block/blk-mq.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4abb714..890fded 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2598,6 +2598,7 @@ static void blk_mq_exit_hw_queues(struct request_queue *q,
 			break;
 		blk_mq_debugfs_unregister_hctx(hctx);
 		blk_mq_exit_hctx(q, set, hctx, i);
+		cancel_delayed_work_sync(&hctx->run_work);
 	}
 }
It would be better to move cancel_delayed_work_sync() into
blk_mq_exit_hctx(), exactly before adding the hctx to the unused list.
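A rough sketch of what that suggestion might look like (the context lines
are approximated from the v5.4-era blk_mq_exit_hctx(), so treat this hunk
as illustrative rather than directly applyable):

```diff
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ static void blk_mq_exit_hctx(struct request_queue *q,
 	blk_mq_remove_cpuhp(hctx);

+	cancel_delayed_work_sync(&hctx->run_work);
+
 	spin_lock(&q->unused_hctx_lock);
 	list_add(&hctx->hctx_list, &q->unused_hctx_list);
 	spin_unlock(&q->unused_hctx_lock);
```

Placing the cancel before the hctx goes onto the unused list ensures no
run_work can still be in flight once the hctx is considered dead.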
Sure, I will do it in my next patch series.
Thanks,
Ming
Thanks and Regards,
Pradeep