On Sun, Mar 17, 2024 at 09:11:27PM -0600, Jens Axboe wrote:
> On 3/17/24 8:47 PM, Ming Lei wrote:
> > On Sun, Mar 17, 2024 at 08:40:59PM -0600, Jens Axboe wrote:
> >> On 3/17/24 8:32 PM, Pavel Begunkov wrote:
> >>> On 3/18/24 02:25, Jens Axboe wrote:
> >>>> On 3/17/24 8:23 PM, Ming Lei wrote:
> >>>>> On Mon, Mar 18, 2024 at 12:41:47AM +0000, Pavel Begunkov wrote:
> >>>>>> !IO_URING_F_UNLOCKED does not translate to availability of the deferred
> >>>>>> completion infra, IO_URING_F_COMPLETE_DEFER does, that what we should
> >>>>>> pass and look for to use io_req_complete_defer() and other variants.
> >>>>>>
> >>>>>> Luckily, it's not a real problem as two wrongs actually made it right,
> >>>>>> at least as far as io_uring_cmd_work() goes.
> >>>>>>
> >>>>>> Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
> >>>>>> Link: https://lore.kernel.org/r/eb08e72e837106963bc7bc7dccfd93d646cc7f36.1710514702.git.asml.silence@xxxxxxxxx
> >>>>>> Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
> >>>
> >>> oops, I should've removed all the signed-offs
> >>>
> >>>>>> ---
> >>>>>>  io_uring/uring_cmd.c | 10 ++++++++--
> >>>>>>  1 file changed, 8 insertions(+), 2 deletions(-)
> >>>>>>
> >>>>>> diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
> >>>>>> index f197e8c22965..ec38a8d4836d 100644
> >>>>>> --- a/io_uring/uring_cmd.c
> >>>>>> +++ b/io_uring/uring_cmd.c
> >>>>>> @@ -56,7 +56,11 @@ EXPORT_SYMBOL_GPL(io_uring_cmd_mark_cancelable);
> >>>>>>  static void io_uring_cmd_work(struct io_kiocb *req, struct io_tw_state *ts)
> >>>>>>  {
> >>>>>>  	struct io_uring_cmd *ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd);
> >>>>>> -	unsigned issue_flags = ts->locked ? 0 : IO_URING_F_UNLOCKED;
> >>>>>> +	unsigned issue_flags = IO_URING_F_UNLOCKED;
> >>>>>> +
> >>>>>> +	/* locked task_work executor checks the deffered list completion */
> >>>>>> +	if (ts->locked)
> >>>>>> +		issue_flags = IO_URING_F_COMPLETE_DEFER;
> >>>>>>  	ioucmd->task_work_cb(ioucmd, issue_flags);
> >>>>>>  }
> >>>>>>
> >>>>>> @@ -100,7 +104,9 @@ void io_uring_cmd_done(struct io_uring_cmd *ioucmd, ssize_t ret, ssize_t res2,
> >>>>>>  	if (req->ctx->flags & IORING_SETUP_IOPOLL) {
> >>>>>>  		/* order with io_iopoll_req_issued() checking ->iopoll_complete */
> >>>>>>  		smp_store_release(&req->iopoll_completed, 1);
> >>>>>> -	} else if (!(issue_flags & IO_URING_F_UNLOCKED)) {
> >>>>>> +	} else if (issue_flags & IO_URING_F_COMPLETE_DEFER) {
> >>>>>> +		if (WARN_ON_ONCE(issue_flags & IO_URING_F_UNLOCKED))
> >>>>>> +			return;
> >>>>>>  		io_req_complete_defer(req);
> >>>>>>  	} else {
> >>>>>>  		req->io_task_work.func = io_req_task_complete;
> >>>>>
> >>>>> 'git-bisect' shows the reported warning starts from this patch.
> >>>
> >>> Thanks Ming
> >>>
> >>>>
> >>>> That does make sense, as probably:
> >>>>
> >>>> +	/* locked task_work executor checks the deffered list completion */
> >>>> +	if (ts->locked)
> >>>> +		issue_flags = IO_URING_F_COMPLETE_DEFER;
> >>>>
> >>>> this assumption isn't true, and that would mess with the task management
> >>>> (which is in your oops).
> >>>
> >>> I'm missing it, how it's not true?
> >>>
> >>> static void ctx_flush_and_put(struct io_ring_ctx *ctx, struct io_tw_state *ts)
> >>> {
> >>> 	...
> >>> 	if (ts->locked) {
> >>> 		io_submit_flush_completions(ctx);
> >>> 		...
> >>> 	}
> >>> }
> >>>
> >>> static __cold void io_fallback_req_func(struct work_struct *work)
> >>> {
> >>> 	...
> >>> 	mutex_lock(&ctx->uring_lock);
> >>> 	llist_for_each_entry_safe(req, tmp, node, io_task_work.node)
> >>> 		req->io_task_work.func(req, &ts);
> >>> 	io_submit_flush_completions(ctx);
> >>> 	mutex_unlock(&ctx->uring_lock);
> >>> 	...
> >>> }
> >>
> >> I took a look too, and don't immediately see it. Those are also the
> >> only two cases I found, and before the patches, it looks fine too.
> >>
> >> So no immediate answer there... But I can confirm that before this
> >> patch, the test passes fine. With the patch, it goes boom pretty
> >> quick. Either directly off putting the task, or an unrelated memory
> >> crash instead.
> >
> > In ublk, the translated 'issue_flags' is passed to io_uring_cmd_done()
> > from ioucmd->task_work_cb() (__ublk_rq_task_work()). That might be
> > related to the reason.
>
> Or maybe ublk is doing multiple invocations of task_work completions? I
> added this:

Yes, your debug log and analysis do help. This patch converts the zero
flag (!IO_URING_F_UNLOCKED) into IO_URING_F_COMPLETE_DEFER, and one call
site is easily overlooked. The fix below addresses it and needs to be
folded into patch 2.

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 5d4b448fdc50..22f2b52390a9 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -3259,7 +3259,8 @@ static bool io_uring_try_cancel_uring_cmd(struct io_ring_ctx *ctx,
 			/* ->sqe isn't available if no async data */
 			if (!req_has_async_data(req))
 				cmd->sqe = NULL;
-			file->f_op->uring_cmd(cmd, IO_URING_F_CANCEL);
+			file->f_op->uring_cmd(cmd, IO_URING_F_CANCEL |
+						   IO_URING_F_COMPLETE_DEFER);
 			ret = true;
 		}
 	}

Thanks,
Ming