On 3/18/24 02:25, Jens Axboe wrote:
On 3/17/24 8:23 PM, Ming Lei wrote:
On Mon, Mar 18, 2024 at 12:41:47AM +0000, Pavel Begunkov wrote:
!IO_URING_F_UNLOCKED does not translate to availability of the deferred
completion infra; IO_URING_F_COMPLETE_DEFER does, and that's what we should
pass and look for to use io_req_complete_defer() and other variants.
Luckily, it's not a real problem, as two wrongs actually made it right,
at least as far as io_uring_cmd_work() goes.
Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
Link: https://lore.kernel.org/r/eb08e72e837106963bc7bc7dccfd93d646cc7f36.1710514702.git.asml.silence@xxxxxxxxx
Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
oops, I should've removed all the signed-offs
---
io_uring/uring_cmd.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index f197e8c22965..ec38a8d4836d 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -56,7 +56,11 @@ EXPORT_SYMBOL_GPL(io_uring_cmd_mark_cancelable);
 static void io_uring_cmd_work(struct io_kiocb *req, struct io_tw_state *ts)
 {
 	struct io_uring_cmd *ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd);
-	unsigned issue_flags = ts->locked ? 0 : IO_URING_F_UNLOCKED;
+	unsigned issue_flags = IO_URING_F_UNLOCKED;
+
+	/* locked task_work executor checks the deferred completion list */
+	if (ts->locked)
+		issue_flags = IO_URING_F_COMPLETE_DEFER;
 
 	ioucmd->task_work_cb(ioucmd, issue_flags);
 }
@@ -100,7 +104,9 @@ void io_uring_cmd_done(struct io_uring_cmd *ioucmd, ssize_t ret, ssize_t res2,
 	if (req->ctx->flags & IORING_SETUP_IOPOLL) {
 		/* order with io_iopoll_req_issued() checking ->iopoll_complete */
 		smp_store_release(&req->iopoll_completed, 1);
-	} else if (!(issue_flags & IO_URING_F_UNLOCKED)) {
+	} else if (issue_flags & IO_URING_F_COMPLETE_DEFER) {
+		if (WARN_ON_ONCE(issue_flags & IO_URING_F_UNLOCKED))
+			return;
 		io_req_complete_defer(req);
 	} else {
 		req->io_task_work.func = io_req_task_complete;
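
For readers following the flags: the patch treats IO_URING_F_COMPLETE_DEFER
and IO_URING_F_UNLOCKED as mutually exclusive descriptions of the completion
context. A minimal userspace sketch of that invariant (flag values and names
here are illustrative, not taken from the kernel headers):

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* illustrative values only, not the kernel's definitions */
#define IO_URING_F_COMPLETE_DEFER	(1U << 0)
#define IO_URING_F_UNLOCKED		(1U << 1)

/* model of the ts->locked -> issue_flags conversion from the patch */
static unsigned tw_to_issue_flags(bool locked)
{
	return locked ? IO_URING_F_COMPLETE_DEFER : IO_URING_F_UNLOCKED;
}

int main(void)
{
	for (int locked = 0; locked <= 1; locked++) {
		unsigned flags = tw_to_issue_flags(locked);

		/* the WARN_ON_ONCE() in io_uring_cmd_done() relies on
		 * exactly one of the two flags being set */
		assert(!!(flags & IO_URING_F_COMPLETE_DEFER) ^
		       !!(flags & IO_URING_F_UNLOCKED));
		printf("locked=%d -> issue_flags=0x%x\n", locked, flags);
	}
	return 0;
}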
'git bisect' shows that the reported warning starts with this patch.
Thanks,
Ming
That does make sense, as probably:
+	/* locked task_work executor checks the deferred completion list */
+	if (ts->locked)
+		issue_flags = IO_URING_F_COMPLETE_DEFER;
this assumption isn't true, and that would mess with the task management
(which is what shows up in your oops).
I'm missing it: how is it not true?
static void ctx_flush_and_put(struct io_ring_ctx *ctx, struct io_tw_state *ts)
{
	...
	if (ts->locked) {
		io_submit_flush_completions(ctx);
		...
	}
}
static __cold void io_fallback_req_func(struct work_struct *work)
{
	...
	mutex_lock(&ctx->uring_lock);
	llist_for_each_entry_safe(req, tmp, node, io_task_work.node)
		req->io_task_work.func(req, &ts);
	io_submit_flush_completions(ctx);
	mutex_unlock(&ctx->uring_lock);
	...
}
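
Both locked executors flush the deferred list before dropping uring_lock,
which is exactly what io_req_complete_defer() depends on. A toy userspace
model of that contract (names are illustrative, not the kernel's):

#include <stdio.h>

#define MAX_DEFERRED 16

/* stand-in for the ctx's deferred completion list */
struct toy_ctx {
	int deferred_res[MAX_DEFERRED];
	int nr_deferred;
};

/* io_req_complete_defer() analogue: only valid while "locked" */
static void complete_defer(struct toy_ctx *ctx, int res)
{
	ctx->deferred_res[ctx->nr_deferred++] = res;
}

/* io_submit_flush_completions() analogue */
static void flush_completions(struct toy_ctx *ctx)
{
	for (int i = 0; i < ctx->nr_deferred; i++)
		printf("posting CQE, res=%d\n", ctx->deferred_res[i]);
	ctx->nr_deferred = 0;
}

int main(void)
{
	struct toy_ctx ctx = { .nr_deferred = 0 };

	/* "locked" task_work callbacks defer their completions... */
	complete_defer(&ctx, 0);
	complete_defer(&ctx, -5);
	/* ...and the executor flushes before "unlocking", as
	 * ctx_flush_and_put() and io_fallback_req_func() do above */
	flush_completions(&ctx);
	return 0;
}

The point being: deferring a completion is only safe when some lock-holding
executor is guaranteed to flush the list afterwards.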
--
Pavel Begunkov