On Tue, Feb 18, 2025 at 02:42:25PM -0800, Keith Busch wrote:
> From: Keith Busch <kbusch@xxxxxxxxxx>
>
> Similar to the fixed file path, requests may depend on a previous one
> to set up an index, so we need to allow linking them. The prep callback
> happens too soon for linked commands, so the lookup needs to be deferred
> to the issue path. Change the prep callbacks to just set the buf_index
> and let generic io_uring code handle the fixed buffer node setup, just
> like it already does for fixed files.
>
> Signed-off-by: Keith Busch <kbusch@xxxxxxxxxx>
> ---

...

> diff --git a/io_uring/net.c b/io_uring/net.c
> index 000dc70d08d0d..39838e8575b53 100644
> --- a/io_uring/net.c
> +++ b/io_uring/net.c
> @@ -1373,6 +1373,10 @@ int io_send_zc_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
>  #endif
>  	if (unlikely(!io_msg_alloc_async(req)))
>  		return -ENOMEM;
> +	if (zc->flags & IORING_RECVSEND_FIXED_BUF) {
> +		req->buf_index = zc->buf_index;
> +		req->flags |= REQ_F_FIXED_BUFFER;
> +	}
>  	if (req->opcode != IORING_OP_SENDMSG_ZC)
>  		return io_send_setup(req, sqe);
>  	return io_sendmsg_setup(req, sqe);
> @@ -1434,25 +1438,10 @@ static int io_send_zc_import(struct io_kiocb *req, unsigned int issue_flags)
>  	struct io_async_msghdr *kmsg = req->async_data;
>  	int ret;
>
> -	if (sr->flags & IORING_RECVSEND_FIXED_BUF) {
> -		struct io_ring_ctx *ctx = req->ctx;
> -		struct io_rsrc_node *node;
> -
> -		ret = -EFAULT;
> -		io_ring_submit_lock(ctx, issue_flags);
> -		node = io_rsrc_node_lookup(&ctx->buf_table, sr->buf_index);
> -		if (node) {
> -			io_req_assign_buf_node(sr->notif, node);

Here the buffer node is assigned to the ->notif request rather than the
current request, so the generic lookup path may need to handle that case
as well.

Otherwise, this patch looks fine:

Reviewed-by: Ming Lei <ming.lei@xxxxxxxxxx>

Thanks,
Ming
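
[Editor's sketch, for illustration only.] One possible way for the issue
path to keep the notif request holding the buffer node once the generic
lookup has run. io_req_assign_buf_node() and sr->notif come from the
quoted pre-patch hunk; req->buf_node, io_import_fixed(), node->buf and
the overall function shape are assumptions about the kernel tree this
series targets, not the actual patch:

	static int io_send_zc_import(struct io_kiocb *req, unsigned int issue_flags)
	{
		struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);
		struct io_async_msghdr *kmsg = req->async_data;
		int ret;

		if (sr->flags & IORING_RECVSEND_FIXED_BUF) {
			/* assumed: generic issue-path code resolved the node already */
			struct io_rsrc_node *node = req->buf_node;

			if (unlikely(!node))
				return -EFAULT;
			/*
			 * The removed hunk parked the node on the notif request so
			 * the registered buffer stays pinned until the zerocopy
			 * notification completes; repeat that with the node the
			 * generic path resolved for the current request.
			 */
			io_req_assign_buf_node(sr->notif, node);
			ret = io_import_fixed(ITER_SOURCE, &kmsg->msg.msg_iter,
					      node->buf, (u64)(uintptr_t)sr->buf,
					      sr->len);
		} else {
			/* non-fixed path unchanged; accounting/sg_from_iter omitted */
			ret = import_ubuf(ITER_SOURCE, sr->buf, sr->len,
					  &kmsg->msg.msg_iter);
		}
		return ret;
	}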