To use ublk zero copy, an application submits a sequence of io_uring
operations:
(1) Register a ublk request's buffer into the fixed buffer table
(2) Use the fixed buffer in some I/O operation
(3) Unregister the buffer from the fixed buffer table

The ordering of these operations is critical; if the fixed buffer
lookup occurs before the register or after the unregister operation,
the I/O will fail with EFAULT or even corrupt a different ublk
request's buffer. It is possible to guarantee the correct order by
linking the operations, but that adds overhead and doesn't allow
multiple I/O operations to execute in parallel using the same ublk
request's buffer. Ideally, the application could just submit the
register, I/O, and unregister SQEs in the desired order without links
and io_uring would ensure the ordering. This mostly works, leveraging
the fact that each io_uring SQE is prepped and issued non-blocking in
order (barring link, drain, and force-async flags). But it requires
the fixed buffer lookup to occur during the initial non-blocking
issue. (A sketch of this unlinked submission pattern follows the
diffstat below.)

This patch series fixes the two gaps where the initial issue can
return EAGAIN before looking up the fixed buffer:
- IORING_OP_SEND_ZC using IORING_RECVSEND_POLL_FIRST (also sketched
  after the diffstat below)
- IORING_OP_URING_CMD, of which NVMe passthru is currently the only
  fixed buffer user. blk_mq_alloc_request() can return EAGAIN before
  io_uring_cmd_import_fixed() is called to look up the fixed buffer.

Caleb Sander Mateos (3):
  io_uring/net: only import send_zc buffer once
  io_uring/net: import send_zc fixed buffer before going async
  io_uring/uring_cmd: import fixed buffer before going async

 drivers/nvme/host/ioctl.c    | 10 ++++------
 include/linux/io_uring/cmd.h |  6 ++----
 io_uring/net.c               | 13 ++++++++-----
 io_uring/rsrc.c              |  6 ++++++
 io_uring/rsrc.h              |  2 ++
 io_uring/uring_cmd.c         | 10 +++++++---
 6 files changed, 29 insertions(+), 18 deletions(-)

--
2.45.2
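
As promised above, here is a minimal sketch (not taken from this
series) of steps (1)-(3) submitted as a single unlinked batch. It
assumes liburing and the UBLK_U_IO_REGISTER_IO_BUF /
UBLK_U_IO_UNREGISTER_IO_BUF commands from <linux/ublk_cmd.h>. The
helper and parameter names (prep_ublk_buf_cmd, ublk_char_fd,
backing_fd, buf_idx, etc.) are illustrative, the convention that the
buffer index travels in ublksrv_io_cmd.addr and that the fixed write
passes a null address (the registered kernel buffer supplies the
pages) reflects my reading of the ublk zero-copy UAPI, and error
handling is elided:

  #include <string.h>
  #include <liburing.h>
  #include <linux/ublk_cmd.h>

  /* Hypothetical helper: prep a ublk buffer (un)register command on
   * the ublk char device. Assumes the 16-byte inline command area of
   * the SQE carries a struct ublksrv_io_cmd and that the fixed buffer
   * table index is passed in its addr field.
   */
  static void prep_ublk_buf_cmd(struct io_uring_sqe *sqe,
                                int ublk_char_fd, __u32 cmd_op,
                                __u16 q_id, __u16 tag, __u64 buf_idx)
  {
          struct ublksrv_io_cmd *cmd;

          memset(sqe, 0, sizeof(*sqe));
          sqe->opcode = IORING_OP_URING_CMD;
          sqe->fd = ublk_char_fd;
          sqe->cmd_op = cmd_op;
          cmd = (struct ublksrv_io_cmd *)sqe->cmd;
          cmd->q_id = q_id;
          cmd->tag = tag;
          cmd->addr = buf_idx;    /* fixed buffer table slot */
  }

  /* Submit register + fixed-buffer write + unregister, no links. */
  static int submit_zc_write(struct io_uring *ring, int ublk_char_fd,
                             int backing_fd, __u16 q_id, __u16 tag,
                             unsigned buf_idx, unsigned len,
                             __u64 dest_off)
  {
          struct io_uring_sqe *sqe;

          /* (1) register the ublk request's buffer at slot buf_idx */
          sqe = io_uring_get_sqe(ring);
          prep_ublk_buf_cmd(sqe, ublk_char_fd,
                            UBLK_U_IO_REGISTER_IO_BUF,
                            q_id, tag, buf_idx);

          /* (2) write the registered (kernel) buffer to the backing
           * file; null address, since the registered buffer supplies
           * the pages
           */
          sqe = io_uring_get_sqe(ring);
          io_uring_prep_write_fixed(sqe, backing_fd, NULL, len,
                                    dest_off, buf_idx);

          /* (3) unregister the buffer from slot buf_idx */
          sqe = io_uring_get_sqe(ring);
          prep_ublk_buf_cmd(sqe, ublk_char_fd,
                            UBLK_U_IO_UNREGISTER_IO_BUF,
                            q_id, tag, buf_idx);

          return io_uring_submit(ring);
  }

The linked alternative mentioned above would instead set
sqe->flags |= IOSQE_IO_LINK on the register and write SQEs, which
guarantees ordering even when an op goes async, but serializes the
chain and prevents multiple I/O operations from using the same
registered buffer in parallel.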
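
For the first gap, the sketch below (again illustrative, with sockfd,
buf, len, and buf_idx as placeholders) preps an IORING_OP_SEND_ZC from
a fixed buffer with IORING_RECVSEND_POLL_FIRST set. Before this
series, POLL_FIRST sent the request to poll, and thus to async issue,
before the fixed buffer was imported, so an unregister submitted later
in the same batch could be processed before the lookup:

  #include <liburing.h>

  /* Hypothetical helper: queue a zero-copy send from fixed buffer
   * slot buf_idx, asking io_uring to wait for the socket to become
   * writable before transmitting.
   */
  static void queue_poll_first_send_zc(struct io_uring *ring,
                                       int sockfd, const void *buf,
                                       size_t len, unsigned buf_idx)
  {
          struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

          io_uring_prep_send_zc_fixed(sqe, sockfd, buf, len,
                                      0 /* msg flags */,
                                      IORING_RECVSEND_POLL_FIRST,
                                      buf_idx);
          /* With this series, the fixed buffer is looked up during
           * the initial non-blocking issue, before the request goes
           * async to wait for poll.
           */
  }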