On 8/20/21 10:21 AM, Hao Xu wrote:
> On 2021/8/18 7:42 PM, Pavel Begunkov wrote:
>> io_fallback_req_func() doesn't expect anyone creating inline
>> completions, and no one currently does that. Teach the function to
>> flush completions, preparing for further changes.
>>
>> Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
>> ---
>>  fs/io_uring.c | 5 +++++
>>  1 file changed, 5 insertions(+)
>>
>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>> index 3da9f1374612..ba087f395507 100644
>> --- a/fs/io_uring.c
>> +++ b/fs/io_uring.c
>> @@ -1197,6 +1197,11 @@ static void io_fallback_req_func(struct work_struct *work)
>>      percpu_ref_get(&ctx->refs);
>>      llist_for_each_entry_safe(req, tmp, node, io_task_work.fallback_node)
>>          req->io_task_work.func(req);
>> +
>> +    mutex_lock(&ctx->uring_lock);
>> +    if (ctx->submit_state.compl_nr)
>> +        io_submit_flush_completions(ctx);
>> +    mutex_unlock(&ctx->uring_lock);
> Why do we protect io_submit_flush_completions() with uring_lock, given
> that it is called in the original context? Btw, why not use
> ctx_flush_and_put()?

The fallback thing is called from a workqueue, not the submitter task
context. See delayed_work and so on.

Regarding locking: io_submit_flush_completions() touches struct
io_submit_state, which is protected by ->uring_lock. In particular,
we're interested in ->reqs and ->free_list. FWIW, there is refurbishment
going on around the submit state, so if it proves useful the locking may
change in the coming months.

ctx_flush_and_put() could have been used, but it's simpler to hand-code
it and avoid the (always messy) conditional ref grabbing and locking.

--
Pavel Begunkov
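
[Context for the "workqueue, not the submitter task" point above: in
this kernel the io_ring_ctx embeds a delayed_work that is initialised
with io_fallback_req_func(), so the function executes in a kworker
thread rather than in the task that queued the work. Below is a
minimal, self-contained sketch of that pattern; only the workqueue API
(INIT_DELAYED_WORK, schedule_delayed_work, container_of) is real kernel
API, the my_* names are hypothetical.]

#include <linux/workqueue.h>

struct my_ctx {
	struct delayed_work fallback_work;
};

/* Runs in a kworker thread, not in the task that queued it. */
static void my_fallback_func(struct work_struct *work)
{
	struct my_ctx *ctx = container_of(work, struct my_ctx,
					  fallback_work.work);
	/* ... process the deferred requests hanging off ctx ... */
}

static void my_ctx_init(struct my_ctx *ctx)
{
	INIT_DELAYED_WORK(&ctx->fallback_work, my_fallback_func);
}

/* Queue the fallback to run later in workqueue (kworker) context. */
static void my_queue_fallback(struct my_ctx *ctx)
{
	schedule_delayed_work(&ctx->fallback_work, 0);
}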
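
[On the ctx_flush_and_put() point: that helper bundles a conditional
flush with dropping a ref the caller was holding on the ctx,
approximately as sketched below. This is a reconstruction from the
5.14/5.15-era code, not a verbatim quote. io_fallback_req_func() takes
its own ref with percpu_ref_get() in the quoted hunk and can assume ctx
is non-NULL, so hand-coding the flush keeps the get/put symmetric and
avoids the helper's conditional ref and NULL handling.]

static void ctx_flush_and_put(struct io_ring_ctx *ctx)
{
	if (!ctx)
		return;
	if (ctx->submit_state.compl_nr) {
		mutex_lock(&ctx->uring_lock);
		io_submit_flush_completions(ctx);
		mutex_unlock(&ctx->uring_lock);
	}
	/* drops the reference the caller was holding on ctx */
	percpu_ref_put(&ctx->refs);
}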