On Mon, Mar 18, 2024 at 12:46:25PM +0000, Pavel Begunkov wrote:
> On 3/18/24 11:59, Ming Lei wrote:
> > On Mon, Mar 18, 2024 at 11:50:32AM +0000, Pavel Begunkov wrote:
> > > On 3/18/24 08:10, Ming Lei wrote:
> > > > On Mon, Mar 18, 2024 at 12:41:48AM +0000, Pavel Begunkov wrote:
> > > > > io_uring_cmd_done() is called from the irq context and is considered to
> > > > > be irq safe, however that's not the case if the driver requires
> > > > > cancellations because io_uring_cmd_del_cancelable() could try to take
> > > > > the uring_lock mutex.
> > > > >
> > > > > Clean up the confusion, by deferring cancellation handling to
> > > > > locked task_work if we came into io_uring_cmd_done() from iowq
> > > > > or other IO_URING_F_UNLOCKED path.
> > > > >
> > > > > Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
> > > > > ---
> > > > >  io_uring/uring_cmd.c | 24 +++++++++++++++++-------
> > > > >  1 file changed, 17 insertions(+), 7 deletions(-)
> > > > >
> > > > > diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
> > > > > index ec38a8d4836d..9590081feb2d 100644
> > > > > --- a/io_uring/uring_cmd.c
> > > > > +++ b/io_uring/uring_cmd.c
> > > > > @@ -14,19 +14,18 @@
> > > > >  #include "rsrc.h"
> > > > >  #include "uring_cmd.h"
> > > > >
> > > > > -static void io_uring_cmd_del_cancelable(struct io_uring_cmd *cmd,
> > > > > -		unsigned int issue_flags)
> > > > > +static void io_uring_cmd_del_cancelable(struct io_uring_cmd *cmd)
> > > > >  {
> > > > >  	struct io_kiocb *req = cmd_to_io_kiocb(cmd);
> > > > >  	struct io_ring_ctx *ctx = req->ctx;
> > > > >
> > > > > +	lockdep_assert_held(&ctx->uring_lock);
> > > > > +
> > > > >  	if (!(cmd->flags & IORING_URING_CMD_CANCELABLE))
> > > > >  		return;
> > > > >
> > > > >  	cmd->flags &= ~IORING_URING_CMD_CANCELABLE;
> > > > > -	io_ring_submit_lock(ctx, issue_flags);
> > > > >  	hlist_del(&req->hash_node);
> > > > > -	io_ring_submit_unlock(ctx, issue_flags);
> > > > >  }
> > > > >
> > > > >  /*
> > > > > @@ -44,6 +43,9 @@ void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
> > > > >  	struct io_kiocb *req = cmd_to_io_kiocb(cmd);
> > > > >  	struct io_ring_ctx *ctx = req->ctx;
> > > > >
> > > > > +	if (WARN_ON_ONCE(ctx->flags & IORING_SETUP_IOPOLL))
> > > > > +		return;
> > > > > +
> > > >
> > > > This limits cancelable commands so they can't be used in an iopoll
> > > > context, which isn't reasonable: supporting iopoll has been on the ublk
> > > > todo list, especially since a single ring context is shared for both
> > > > commands and normal IO.
> > >
> > > That's something that can be solved when it's needed, and hence the
> > > warning so it's not missed. That would need del_cancelable on the
> > > ->iopoll side, but depends on the "leaving in cancel queue"
> > > problem resolution.
> >
> > The current code actually works with iopoll, so adding the warning
> > could be a regression. Maybe someone has been using ublk with iopoll.
>
> Hmm, I don't see ->uring_cmd_iopoll implemented anywhere, and w/o
> it io_uring should fail the request:
>
> int io_uring_cmd(struct io_kiocb *req, unsigned int issue_flags)
> {
> 	...
> 	if (ctx->flags & IORING_SETUP_IOPOLL) {
> 		if (!file->f_op->uring_cmd_iopoll)
> 			return -EOPNOTSUPP;
> 		issue_flags |= IO_URING_F_IOPOLL;
> 	}
> }
>
> Did I miss it somewhere?

Looks like I missed the point; the WARN() is fine now.

Thanks,
Ming
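
---

For readers less familiar with the helpers under discussion, below is a
minimal driver-side sketch of how they are typically used. It is not taken
from the patch above: the mydrv_* names and the completion helper are
hypothetical, while io_uring_cmd_mark_cancelable() and io_uring_cmd_done()
are the exported io_uring helpers this thread is about.

/*
 * Minimal sketch, assuming a hypothetical "mydrv" character driver in the
 * ublk/nvme style.  Only io_uring_cmd_mark_cancelable() and
 * io_uring_cmd_done() are real io_uring API; everything else is illustrative.
 */
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/io_uring/cmd.h>	/* <linux/io_uring.h> on older kernels */

static int mydrv_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
{
	/*
	 * Put the command on the ring's cancelable list so the core can
	 * force-cancel it at ring teardown (it then re-issues ->uring_cmd()
	 * with IO_URING_F_CANCEL set).  This bookkeeping needs ctx->uring_lock:
	 * it is either already held by the submission path or taken here when
	 * issue_flags carries IO_URING_F_UNLOCKED -- the same mutex that makes
	 * the removal unsafe from irq context, as discussed above.
	 */
	io_uring_cmd_mark_cancelable(cmd, issue_flags);

	/* driver-specific queuing of 'cmd' would happen here (hypothetical) */

	return -EIOCBQUEUED;	/* completion arrives asynchronously later */
}

/*
 * Completion side, possibly reached from hard-irq context by the driver.
 * issue_flags must describe the locking state of the *current* context,
 * not the one seen at submission time.
 */
static void mydrv_complete_cmd(struct io_uring_cmd *cmd, int err,
			       unsigned int issue_flags)
{
	io_uring_cmd_done(cmd, err, 0, issue_flags);
}

static const struct file_operations mydrv_fops = {
	.owner		= THIS_MODULE,
	.uring_cmd	= mydrv_uring_cmd,
};

The thread above is about the second half of that picture: before the patch,
io_uring_cmd_done() could reach io_uring_cmd_del_cancelable() from irq
context, where taking the ctx->uring_lock mutex is not allowed, hence moving
the list removal under a lockdep-asserted, locked task_work path.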