With dynamic buffer updates, registered buffers in the table may change
at any moment. First of all, we want to prevent future races between
updating and importing (i.e. io_import_fixed()), where the latter may
happen without uring_lock held, e.g. from io-wq.

A second problem is that currently we may do the import several times
for IORING_OP_{READ,WRITE}_FIXED, e.g. getting -EAGAIN on an inline
attempt and then redoing the import after apoll/from io-wq. In this
case it can see two completely different buffers, which is not good,
especially since we often hide short reads from userspace.

Copy the iter when going async. There are concerns about performance.

Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
---
 fs/io_uring.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index cff8561d567a..c80b5fef159d 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -903,6 +903,7 @@ static const struct io_op_def io_op_defs[] = {
 		.unbound_nonreg_file	= 1,
 		.pollin			= 1,
 		.plug			= 1,
+		.needs_async_setup	= 1,
 		.async_size		= sizeof(struct io_async_rw),
 	},
 	[IORING_OP_WRITE_FIXED] = {
@@ -911,6 +912,7 @@ static const struct io_op_def io_op_defs[] = {
 		.unbound_nonreg_file	= 1,
 		.pollout		= 1,
 		.plug			= 1,
+		.needs_async_setup	= 1,
 		.async_size		= sizeof(struct io_async_rw),
 	},
 	[IORING_OP_POLL_ADD] = {
@@ -2683,6 +2685,10 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		kiocb->ki_complete = io_complete_rw;
 	}
 
+	if (req->opcode == IORING_OP_READ_FIXED ||
+	    req->opcode == IORING_OP_WRITE_FIXED)
+		io_req_set_rsrc_node(req);
+
 	req->rw.addr = READ_ONCE(sqe->addr);
 	req->rw.len = READ_ONCE(sqe->len);
 	req->buf_index = READ_ONCE(sqe->buf_index);
@@ -5919,7 +5925,9 @@ static int io_req_prep_async(struct io_kiocb *req)
 
 	switch (req->opcode) {
 	case IORING_OP_READV:
+	case IORING_OP_READ_FIXED:
 		return io_rw_prep_async(req, READ);
+	case IORING_OP_WRITE_FIXED:
 	case IORING_OP_WRITEV:
 		return io_rw_prep_async(req, WRITE);
 	case IORING_OP_SENDMSG:
-- 
2.31.1
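For context (not part of the patch), below is a minimal userspace sketch of the
scenario the message describes: an IORING_OP_READ_FIXED that may be punted and
retried, racing with a registered-buffer update. The file name "testfile", the
queue depth, the buffer size, and the use of liburing's
io_uring_register_buffers_update_tag() are illustrative assumptions, not taken
from this series.

/*
 * Minimal sketch: a READ_FIXED that may go async, racing with a
 * registered-buffer table update.  Not a reproducer from the series;
 * file name, sizes and the update helper are assumptions.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <liburing.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	struct iovec iov[1];
	__u64 tags[1] = { 1 };
	char *buf = malloc(4096), *newbuf = malloc(4096);
	int fd, ret;

	if (!buf || !newbuf || io_uring_queue_init(8, &ring, 0))
		return 1;

	/* Register the fixed buffer table that READ_FIXED indexes below. */
	iov[0].iov_base = buf;
	iov[0].iov_len = 4096;
	if (io_uring_register_buffers(&ring, iov, 1))
		return 1;

	fd = open("testfile", O_RDONLY);	/* hypothetical path */
	if (fd < 0)
		return 1;

	sqe = io_uring_get_sqe(&ring);
	/* buf_index 0 refers to the registered buffer, not a plain pointer. */
	io_uring_prep_read_fixed(sqe, fd, buf, 4096, 0, 0);
	io_uring_submit(&ring);

	/*
	 * Concurrently (or before the request is retried from io-wq after
	 * -EAGAIN), replace slot 0.  Without importing once at prep time,
	 * a retry could validate addr/len against a different buffer than
	 * the inline attempt did.
	 */
	iov[0].iov_base = newbuf;
	ret = io_uring_register_buffers_update_tag(&ring, 0, iov, tags, 1);
	if (ret < 0)
		fprintf(stderr, "buffer update: %s\n", strerror(-ret));

	io_uring_wait_cqe(&ring, &cqe);
	printf("read result: %d\n", cqe->res);
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	return 0;
}

With the patch applied, the fixed-buffer opcodes take the needs_async_setup
path, so the iter is copied (and the rsrc node grabbed at prep via
io_req_set_rsrc_node()) before going async, and a retry from io-wq reuses that
copy instead of re-resolving buf_index against whatever is in the table at
that point.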