On 5/26/21 4:52 PM, Marco Elver wrote:
> Due to some moving around of code, the patch lost the actual fix (using
> atomically read io_wq) -- so here it is again ... hopefully as intended.
> :-)

"fortify" damn it...

tctx->io_wq was synchronised with &ctx->uring_lock before, see
io_uring_try_cancel_iowq() and io_uring_del_tctx_node(), so it should not
be cleared before *del_tctx_node(). The fix should just move the clearing
after this sync point. Will you send it out as a patch?

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 7db6aaf31080..b76ba26b4c6c 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -9075,11 +9075,12 @@ static void io_uring_clean_tctx(struct io_uring_task *tctx)
 	struct io_tctx_node *node;
 	unsigned long index;
 
-	tctx->io_wq = NULL;
 	xa_for_each(&tctx->xa, index, node)
 		io_uring_del_tctx_node(index);
-	if (wq)
+	if (wq) {
+		tctx->io_wq = NULL;
 		io_wq_put_and_exit(wq);
+	}
 }
 
 static s64 tctx_inflight(struct io_uring_task *tctx, bool tracked)

-- 
Pavel Begunkov
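
To make the ordering argument above concrete, below is a minimal userspace
model (plain C with pthreads) of the scheme the patch restores. It is not
kernel code; the struct, field and function names are placeholders for the
example only. The point it illustrates: a canceller dereferences
tctx->io_wq only while holding the same lock under which the node is
unlinked, so clearing the pointer after the locked unlink can never be
observed by a concurrent canceller.

/*
 * Toy userspace model of the ordering discussed above -- NOT kernel code.
 * The "canceller" thread touches tctx.io_wq only while holding uring_lock
 * (standing in for ctx->uring_lock), and the cleanup path clears
 * tctx.io_wq only after the node has been unlinked under that same lock.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct io_wq { int dummy; };

struct tctx {
	struct io_wq *io_wq;
	bool on_list;		/* stands in for the tctx_list/xarray node */
};

static pthread_mutex_t uring_lock = PTHREAD_MUTEX_INITIALIZER;
static struct tctx tctx = { .on_list = true };

/* Canceller side: reads tctx.io_wq only under uring_lock. */
static void *try_cancel_iowq(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&uring_lock);
	if (tctx.on_list && tctx.io_wq)
		printf("cancelling work on io_wq %p\n", (void *)tctx.io_wq);
	pthread_mutex_unlock(&uring_lock);
	return NULL;
}

/* Cleanup side, mirroring the fixed ordering in the patch above. */
static void clean_tctx(void)
{
	struct io_wq *wq = tctx.io_wq;

	/* Unlink the node under the lock first... */
	pthread_mutex_lock(&uring_lock);
	tctx.on_list = false;
	pthread_mutex_unlock(&uring_lock);

	/*
	 * ...and only then clear and destroy io_wq: after the unlink no
	 * canceller can find this tctx, so the plain store is safe.
	 */
	if (wq) {
		tctx.io_wq = NULL;
		free(wq);
	}
}

int main(void)
{
	pthread_t t;

	tctx.io_wq = calloc(1, sizeof(*tctx.io_wq));
	pthread_create(&t, NULL, try_cancel_iowq, NULL);
	clean_tctx();
	pthread_join(t, NULL);
	return 0;
}

With the pre-patch ordering (clearing tctx->io_wq before the unlink), the
canceller could still find the node on the list and load io_wq while the
owner task concurrently stores NULL to it; moving the clear after the
unlink closes that window without needing an atomic read.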