When IORING_OP_SEND_ZC is used with the IORING_RECVSEND_POLL_FIRST flag,
the initial issue will return -EAGAIN to force arming the poll handler.
If the operation is also using fixed buffers, the fixed buffer lookup
does not happen until the subsequent issue. This ordering difference is
observable when using UBLK_U_IO_{,UN}REGISTER_IO_BUF SQEs to modify the
fixed buffer table. If the IORING_OP_SEND_ZC operation is followed
immediately by a UBLK_U_IO_UNREGISTER_IO_BUF that unregisters the fixed
buffer, IORING_RECVSEND_POLL_FIRST will cause the fixed buffer lookup to
fail because it happens after the buffer is unregistered.

Swap the order of the buffer import and IORING_RECVSEND_POLL_FIRST check
to ensure the fixed buffer lookup happens on the initial issue even if
the operation goes async.

Signed-off-by: Caleb Sander Mateos <csander@xxxxxxxxxxxxxxx>
Fixes: 27cb27b6d5ea ("io_uring: add support for kernel registered bvecs")
---
 io_uring/net.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/io_uring/net.c b/io_uring/net.c
index a29893d567b8..5adc7b80138e 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -1367,21 +1367,21 @@ int io_send_zc(struct io_kiocb *req, unsigned int issue_flags)
 	if (unlikely(!sock))
 		return -ENOTSOCK;
 	if (!test_bit(SOCK_SUPPORT_ZC, &sock->flags))
 		return -EOPNOTSUPP;
 
-	if (!(req->flags & REQ_F_POLLED) &&
-	    (zc->flags & IORING_RECVSEND_POLL_FIRST))
-		return -EAGAIN;
-
 	if (!zc->imported) {
 		zc->imported = true;
 		ret = io_send_zc_import(req, issue_flags);
 		if (unlikely(ret))
 			return ret;
 	}
 
+	if (!(req->flags & REQ_F_POLLED) &&
+	    (zc->flags & IORING_RECVSEND_POLL_FIRST))
+		return -EAGAIN;
+
 	msg_flags = zc->msg_flags;
 	if (issue_flags & IO_URING_F_NONBLOCK)
 		msg_flags |= MSG_DONTWAIT;
 	if (msg_flags & MSG_WAITALL)
 		min_ret = iov_iter_count(&kmsg->msg.msg_iter);
-- 
2.45.2