On 6/2/20 1:16 PM, Jann Horn wrote:
> On Tue, Jun 2, 2020 at 8:42 PM Jens Axboe <axboe@xxxxxxxxx> wrote:
>> On 6/2/20 12:22 PM, Jann Horn wrote:
>>> On Sun, May 31, 2020 at 10:19 PM Jens Axboe <axboe@xxxxxxxxx> wrote:
>>>> We just need this ported to stable once it goes into 5.8-rc:
>>>>
>>>> https://git.kernel.dk/cgit/linux-block/commit/?h=for-5.8/io_uring&id=904fbcb115c85090484dfdffaf7f461d96fe8e53
>>>
>>> How does that work? Who guarantees that the close operation can't drop
>>> the refcount of the uring instance to zero before reaching the fdput()
>>> in io_uring_enter?
>>
>> Because io_uring_enter() holds a reference to it as well?
>
> Which reference do you mean? fdget() doesn't take a reference if the
> calling process is single-threaded, you'd have to use fget() for that.

I meant the ctx->refs, but that's not enough for the file, good point.

I'll apply the below on top - that should fix the issue with O_PATH
still, while retaining our logic not to allow ring closure. I think we
could make ring closure work, but I don't want to use fget() if I can
avoid it. And it really doesn't seem worth it to go through the trouble
of adding any extra code to allow ring closure.
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 732ec73ec3c0..2ce972d9a49e 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -696,6 +696,8 @@ struct io_op_def {
 	unsigned		needs_mm : 1;
 	/* needs req->file assigned */
 	unsigned		needs_file : 1;
+	/* don't fail if file grab fails */
+	unsigned		needs_file_no_error : 1;
 	/* hash wq insertion if file is a regular file */
 	unsigned		hash_reg_file : 1;
 	/* unbound wq insertion if file is a non-regular file */
@@ -802,6 +804,8 @@ static const struct io_op_def io_op_defs[] = {
 		.needs_fs		= 1,
 	},
 	[IORING_OP_CLOSE] = {
+		.needs_file		= 1,
+		.needs_file_no_error	= 1,
 		.file_table		= 1,
 	},
 	[IORING_OP_FILES_UPDATE] = {
@@ -3424,6 +3428,10 @@ static int io_close_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		return -EBADF;
 
 	req->close.fd = READ_ONCE(sqe->fd);
+	if ((req->file && req->file->f_op == &io_uring_fops) ||
+	    req->close.fd == req->ctx->ring_fd)
+		return -EBADF;
+
 	return 0;
 }
 
@@ -5437,19 +5445,20 @@ static int io_file_get(struct io_submit_state *state, struct io_kiocb *req,
 			return -EBADF;
 		fd = array_index_nospec(fd, ctx->nr_user_files);
 		file = io_file_from_index(ctx, fd);
-		if (!file)
-			return -EBADF;
-		req->fixed_file_refs = ctx->file_data->cur_refs;
-		percpu_ref_get(req->fixed_file_refs);
+		if (file) {
+			req->fixed_file_refs = ctx->file_data->cur_refs;
+			percpu_ref_get(req->fixed_file_refs);
+		}
 	} else {
 		trace_io_uring_file_get(ctx, fd);
 		file = __io_file_get(state, fd);
-		if (unlikely(!file))
-			return -EBADF;
 	}
 
-	*out_file = file;
-	return 0;
+	if (file || io_op_defs[req->opcode].needs_file_no_error) {
+		*out_file = file;
+		return 0;
+	}
+	return -EBADF;
 }
 
 static int io_req_set_file(struct io_submit_state *state, struct io_kiocb *req,

-- 
Jens Axboe