On 8/6/20 1:39 AM, Stefano Garzarella wrote:
> On Wed, Aug 05, 2020 at 01:02:23PM -0600, Jens Axboe wrote:
>> If we hit an earlier error path in io_uring_create(), then we will have
>> accounted memory, but not set ctx->{sq,cq}_entries yet. Then when the
>> ring is torn down in error, we use those values to unaccount the memory.
>>
>> Ensure we set the ctx entries before we're able to hit a potential error
>> path.
>>
>> Cc: stable@xxxxxxxxxxxxxxx
>> Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
>> ---
>>  fs/io_uring.c | 6 ++++--
>>  1 file changed, 4 insertions(+), 2 deletions(-)
>>
>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>> index 8f96566603f3..0d857f7ca507 100644
>> --- a/fs/io_uring.c
>> +++ b/fs/io_uring.c
>> @@ -8193,6 +8193,10 @@ static int io_allocate_scq_urings(struct io_ring_ctx *ctx,
>>  	struct io_rings *rings;
>>  	size_t size, sq_array_offset;
>>
>> +	/* make sure these are sane, as we already accounted them */
>> +	ctx->sq_entries = p->sq_entries;
>> +	ctx->cq_entries = p->cq_entries;
>> +
>>  	size = rings_size(p->sq_entries, p->cq_entries, &sq_array_offset);
>>  	if (size == SIZE_MAX)
>>  		return -EOVERFLOW;
>> @@ -8209,8 +8213,6 @@ static int io_allocate_scq_urings(struct io_ring_ctx *ctx,
>>  	rings->cq_ring_entries = p->cq_entries;
>>  	ctx->sq_mask = rings->sq_ring_mask;
>>  	ctx->cq_mask = rings->cq_ring_mask;
>> -	ctx->sq_entries = rings->sq_ring_entries;
>> -	ctx->cq_entries = rings->cq_ring_entries;
>>
>>  	size = array_size(sizeof(struct io_uring_sqe), p->sq_entries);
>>  	if (size == SIZE_MAX) {
>> --
>> 2.28.0
>>
>
> While reviewing, I was wondering whether we should move io_account_mem()
> before io_allocate_scq_urings(), then I saw the second patch :-)

Indeed, just split it in two to avoid any extra issues around backporting.

> Reviewed-by: Stefano Garzarella <sgarzare@xxxxxxxxxx>

Thanks, added.

-- 
Jens Axboe
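
For readers following along, below is a minimal standalone sketch of the
ordering hazard the patch closes. The names (account_mem(), create_ring(),
teardown(), etc.) are illustrative only and not the actual io_uring code:
memory is charged up front from the requested entry counts, but teardown
un-charges based on ctx fields that an early error path leaves unset, so
the accounting never balances.

/*
 * Illustrative sketch (hypothetical names, not the kernel code): memory is
 * accounted from the caller's requested entry counts, but un-accounted from
 * ctx->sq_entries / ctx->cq_entries. If an error path is hit before those
 * fields are set, teardown un-accounts zero and the accounting leaks.
 */
#include <stdio.h>

static long accounted_pages;

static void account_mem(unsigned entries)
{
	accounted_pages += entries;	/* stand-in for the real page math */
}

static void unaccount_mem(unsigned entries)
{
	accounted_pages -= entries;
}

struct ctx {
	unsigned sq_entries;
	unsigned cq_entries;
};

static int create_ring(struct ctx *ctx, unsigned sq, unsigned cq, int fail_early)
{
	account_mem(sq + cq);		/* charged up front, from the parameters */

	if (fail_early)
		return -1;		/* error before the ctx entries are set */

	ctx->sq_entries = sq;		/* the fix moves these before any error path */
	ctx->cq_entries = cq;
	return 0;
}

static void teardown(struct ctx *ctx)
{
	/* teardown trusts ctx->*_entries, not the original parameters */
	unaccount_mem(ctx->sq_entries + ctx->cq_entries);
}

int main(void)
{
	struct ctx ctx = { 0 };

	if (create_ring(&ctx, 128, 256, 1) < 0)
		teardown(&ctx);

	/* non-zero here means accounted memory was never given back */
	printf("leaked accounting: %ld\n", accounted_pages);
	return 0;
}

With fail_early set, the sketch prints a non-zero leak; setting the ctx
entry counts before the error check, as the patch does in
io_allocate_scq_urings(), makes the account/unaccount pair balance again.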