On 2/12/25 2:58 PM, Caleb Sander wrote:
> On Wed, Feb 12, 2025 at 1:02 PM Jens Axboe <axboe@xxxxxxxxx> wrote:
>>
>> On 2/12/25 1:55 PM, Jens Axboe wrote:
>>> On 2/12/25 1:45 PM, Caleb Sander Mateos wrote:
>>>> In our application issuing NVMe passthru commands, we have observed
>>>> nvme_uring_cmd fields being corrupted between when userspace initializes
>>>> the io_uring SQE and when nvme_uring_cmd_io() processes it.
>>>>
>>>> We hypothesized that the uring_cmds were executing asynchronously after
>>>> the io_uring_enter() syscall returned, yet were still reading the SQE in
>>>> the userspace-mapped SQ. Since io_uring_enter() had already incremented
>>>> the SQ head index, userspace reused the SQ slot for a new SQE once the
>>>> SQ wrapped around to it.
>>>>
>>>> We confirmed this hypothesis by "poisoning" all SQEs up to the SQ head
>>>> index in userspace upon return from io_uring_enter(). By overwriting the
>>>> nvme_uring_cmd nsid field with a known garbage value, we were able to
>>>> trigger the error message in nvme_validate_passthru_nsid(), which logged
>>>> the garbage nsid value.
>>>>
>>>> The issue is caused by commit 5eff57fa9f3a ("io_uring/uring_cmd: defer
>>>> SQE copying until it's needed"). With this commit reverted, the poisoned
>>>> values in the SQEs are no longer seen by nvme_uring_cmd_io().
>>>>
>>>> Prior to the commit, each uring_cmd SQE was unconditionally memcpy()ed
>>>> to async_data at prep time. The commit moved this memcpy() to 2 cases
>>>> where the request goes async:
>>>> - If REQ_F_FORCE_ASYNC is set to force the initial issue to go async
>>>> - If ->uring_cmd() returns -EAGAIN in the initial non-blocking issue
>>>>
>>>> This patch set fixes a bug in the EAGAIN case where the uring_cmd's sqe
>>>> pointer is not updated to point to async_data after the memcpy(),
>>>> as it correctly is in the REQ_F_FORCE_ASYNC case.
>>>>
>>>> However, uring_cmds can be issued async in other cases not enumerated
>>>> by 5eff57fa9f3a, also leading to SQE corruption. These include requests
>>>> besides the first in a linked chain, which are only issued once prior
>>>> requests complete. Requests waiting for a drain to complete would also
>>>> be initially issued async.
>>>>
>>>> While it's probably possible for io_uring_cmd_prep_setup() to check for
>>>> each of these cases and avoid deferring the SQE memcpy(), we feel it
>>>> might be safer to revert 5eff57fa9f3a to avoid the corruption risk.
>>>> As discussed recently in regard to the ublk zero-copy patches[1], new
>>>> async paths added in the future could break these delicate assumptions.
>>>
>>> I don't think it's particularly delicate - did you manage to catch the
>>> case queueing a request for async execution where the sqe wasn't already
>>> copied? I did take a quick look after our out-of-band conversation, and
>>> the only missing bit I immediately spotted is using SQPOLL. But I don't
>>> think you're using that, right? And in any case, the lifetime of SQEs
>>> with SQPOLL is the duration of the request anyway, so they should not
>>> pose any risk of overwriting SQEs. But I do think the code should copy
>>> for that case too, just to avoid it being a harder-to-use thing than it
>>> should be.
>>>
>>> The two patches here look good, I'll go ahead with those. That'll give
>>> us a bit of time to figure out where this missing copy is.
>>
>> Can you try this on top of your 2 and see if you still hit anything odd?
>>
>> diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
>> index bcfca18395c4..15a8a67f556e 100644
>> --- a/io_uring/uring_cmd.c
>> +++ b/io_uring/uring_cmd.c
>> @@ -177,10 +177,13 @@ static void io_uring_cmd_cache_sqes(struct io_kiocb *req)
>>  	ioucmd->sqe = cache->sqes;
>>  }
>>
>> +#define SQE_COPY_FLAGS	(REQ_F_FORCE_ASYNC|REQ_F_LINK|REQ_F_HARDLINK|REQ_F_IO_DRAIN)
>
> I believe this still misses the last request in a linked chain, which
> won't have REQ_F_LINK/REQ_F_HARDLINK set?

Yeah, good point. I think we should just be looking at link->head
instead to see if the request is a link, or part of a linked
submission. That may overshoot a bit, but that should be fine - it'll
be a false positive. Alternatively, we can still check the link flags
and compare with link->last instead...

But the whole thing still feels a bit iffy. The whole uring_cmd setup
with an SQE that's sometimes the actual SQE, and sometimes a copy when
needed, does not fill me with joy.

> IOSQE_IO_DRAIN also causes subsequent operations to be issued async;
> is REQ_F_IO_DRAIN set on those operations too?

The first 8 flags are directly set in the io_kiocb at init time. So if
IOSQE_IO_DRAIN is set, then REQ_F_IO_DRAIN will be set, as they are one
and the same.

diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index bcfca18395c4..9e60b5bb5a60 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -177,10 +177,14 @@ static void io_uring_cmd_cache_sqes(struct io_kiocb *req)
 	ioucmd->sqe = cache->sqes;
 }
 
+#define SQE_COPY_FLAGS	(REQ_F_FORCE_ASYNC|REQ_F_IO_DRAIN)
+
 static int io_uring_cmd_prep_setup(struct io_kiocb *req,
 				   const struct io_uring_sqe *sqe)
 {
 	struct io_uring_cmd *ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd);
+	struct io_ring_ctx *ctx = req->ctx;
+	struct io_submit_link *link = &ctx->submit_state.link;
 	struct io_uring_cmd_data *cache;
 
 	cache = io_uring_alloc_async_data(&req->ctx->uring_cache, req);
@@ -190,7 +194,8 @@ static int io_uring_cmd_prep_setup(struct io_kiocb *req,
 	ioucmd->sqe = sqe;
 
 	/* defer memcpy until we need it */
-	if (unlikely(req->flags & REQ_F_FORCE_ASYNC))
+	if (unlikely(ctx->flags & IORING_SETUP_SQPOLL ||
+		     req->flags & SQE_COPY_FLAGS || link->head))
 		io_uring_cmd_cache_sqes(req);
 	return 0;
 }

-- 
Jens Axboe
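For reference, below is a minimal userspace sketch of the SQE "poisoning"
technique described in the cover letter quoted above. It assumes liburing
(2.3 or newer, for the cached sq.ring_mask field) and reaches into struct
io_uring_sq internals, so treat it as debug-only illustration rather than
the exact code Caleb's application uses:

#include <liburing.h>
#include <string.h>

#define SQE_POISON_BYTE	0xa5	/* arbitrary, easy-to-spot garbage */

/*
 * Poison every SQ slot the kernel has already consumed. If a request
 * (e.g. an async uring_cmd) later reads its SQE out of the shared SQ
 * ring, it will see this garbage - for NVMe passthru, as the bogus
 * nsid logged by nvme_validate_passthru_nsid().
 */
static void poison_consumed_sqes(struct io_uring *ring, unsigned *prev_head)
{
	unsigned mask = ring->sq.ring_mask;
	/* khead is advanced by the kernel as it consumes SQEs */
	unsigned head = __atomic_load_n(ring->sq.khead, __ATOMIC_ACQUIRE);

	for (; *prev_head != head; (*prev_head)++)
		memset(&ring->sq.sqes[*prev_head & mask], SQE_POISON_BYTE,
		       sizeof(struct io_uring_sqe));
}

Calling poison_consumed_sqes() right after each io_uring_submit() return
(with *prev_head starting at 0) makes any late kernel read of the shared
SQ ring show up immediately as the poison pattern, instead of depending on
the SQ wrapping around and userspace happening to reuse the slot.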