On 2021/9/29 7:13 PM, Pavel Begunkov wrote:
On 9/27/21 1:36 PM, Hao Xu wrote:
For multishot mode, there may be cases like:
    io_poll_task_func()
      -> add_wait_queue()
    async_wake()
      -> io_req_task_work_add()
This messes up the running task_work list,
since req->io_task_work.node is still in use.
By the time req->io_task_work.func() is called, node->next is undefined
and free to use. io_req_task_work_add() will override it without looking
at a prior value and that's fine, as the calling code doesn't touch it
after the callback.
I misunderstood the code, since node->next will be reset to NULL in
wq_list_add_tail(), so no problem here.
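For reference, wq_list_add_tail() in fs/io-wq.h does roughly the following (paraphrased, not quoted verbatim, and the exact position of the node->next reset may differ in the tree), so a node that gets queued again starts with a clean ->next:

	static inline void wq_list_add_tail(struct io_wq_work_node *node,
					    struct io_wq_work_list *list)
	{
		/* reset the link before (re)inserting this node */
		node->next = NULL;
		if (!list->first) {
			list->last = node;
			WRITE_ONCE(list->first, node);
		} else {
			list->last->next = node;
			list->last = node;
		}
	}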
A similar situation exists for req->io_task_work.fallback_node.
Fix it by setting node->next = NULL before we run the tw, so that when we
add the req back to the wait queue in the middle of tw running, we can
safely re-add it to the tw list.
I might be missing what problem you're trying to fix. Does the
above make sense? It doesn't sound like node->next = NULL can solve
anything.
Fixes: 7cbf1722d5fc ("io_uring: provide FIFO ordering for task_work")
Signed-off-by: Hao Xu <haoxu@xxxxxxxxxxxxxxxxx>
---
fs/io_uring.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index d0b358b9b589..f667d6286438 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1250,13 +1250,17 @@ static void io_fallback_req_func(struct work_struct *work)
 	struct io_ring_ctx *ctx = container_of(work, struct io_ring_ctx,
 						fallback_work.work);
 	struct llist_node *node = llist_del_all(&ctx->fallback_llist);
-	struct io_kiocb *req, *tmp;
+	struct io_kiocb *req;
 	bool locked = false;
 
 	percpu_ref_get(&ctx->refs);
-	llist_for_each_entry_safe(req, tmp, node, io_task_work.fallback_node)
+	req = llist_entry(node, struct io_kiocb, io_task_work.fallback_node);
+	while (member_address_is_nonnull(req, io_task_work.fallback_node)) {
+		node = req->io_task_work.fallback_node.next;
+		req->io_task_work.fallback_node.next = NULL;
 		req->io_task_work.func(req, &locked);
-
+		req = llist_entry(node, struct io_kiocb, io_task_work.fallback_node);
+	}
 	if (locked) {
 		io_submit_flush_completions(ctx);
 		mutex_unlock(&ctx->uring_lock);
@@ -2156,6 +2160,7 @@ static void tctx_task_work(struct callback_head *cb)
 				locked = mutex_trylock(&ctx->uring_lock);
 				percpu_ref_get(&ctx->refs);
 			}
+			node->next = NULL;
 			req->io_task_work.func(req, &locked);
 			node = next;
 		} while (node);
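For readability, here is the open-coded loop from the first hunk again with its intent spelled out in comments (a commented paraphrase of the hunk above, not an additional change; member_address_is_nonnull() is the helper that llist_for_each_entry_safe() already uses in include/linux/llist.h):

	/* walk the fallback list by hand instead of llist_for_each_entry_safe() */
	req = llist_entry(node, struct io_kiocb, io_task_work.fallback_node);
	while (member_address_is_nonnull(req, io_task_work.fallback_node)) {
		/* sample the next entry before the callback can reuse this node */
		node = req->io_task_work.fallback_node.next;
		/* clear ->next so a re-queued req starts from a clean node */
		req->io_task_work.fallback_node.next = NULL;
		req->io_task_work.func(req, &locked);
		req = llist_entry(node, struct io_kiocb, io_task_work.fallback_node);
	}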