If we use a buffer group ID that is large enough to require io_uring
to allocate it, then we don't correctly free it if the cleanup is
deferred to the ring exit. The explicit removal paths are fine.

Fixes: 9cfc7e94e42b ("io_uring: get rid of hashed provided buffer groups")
Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>

---

diff --git a/fs/io_uring.c b/fs/io_uring.c
index f7e1fc85d266..ccb47d87a65a 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -11065,6 +11065,7 @@ static void io_destroy_buffers(struct io_ring_ctx *ctx)
 	xa_for_each(&ctx->io_bl_xa, index, bl) {
 		xa_erase(&ctx->io_bl_xa, bl->bgid);
 		__io_remove_buffers(ctx, bl, -1U);
+		kfree(bl);
 	}
 
 	while (!list_empty(&ctx->io_buffers_pages)) {

-- 
Jens Axboe
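
For illustration, here is a minimal userspace sketch (not part of the
patch) of the scenario the commit message describes: provide a buffer
under a large group ID, then exit the ring without an explicit
IORING_OP_REMOVE_BUFFERS, so cleanup goes through io_destroy_buffers()
at ring teardown. It assumes liburing is available, and the choice of
BGID 65535 is an assumption standing in for "large enough to require
io_uring to allocate" the buffer list dynamically.

/*
 * Hypothetical reproducer sketch. Assumes liburing; BGID 65535 is an
 * assumed value for a group ID large enough that the kernel allocates
 * the io_buffer_list on demand rather than using a fixed table.
 */
#include <stdlib.h>
#include <liburing.h>

#define BGID 65535

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	void *buf = malloc(4096);

	if (!buf || io_uring_queue_init(8, &ring, 0))
		return 1;

	/* Provide one 4k buffer in group BGID; the kernel allocates the
	 * backing buffer list for this group when it first sees it. */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_provide_buffers(sqe, buf, 4096, 1, BGID, 0);
	io_uring_submit(&ring);
	io_uring_wait_cqe(&ring, &cqe);
	io_uring_cqe_seen(&ring, cqe);

	/* No explicit buffer removal: freeing is deferred to ring exit,
	 * the path that leaked the allocated list before this fix. */
	io_uring_queue_exit(&ring);
	free(buf);
	return 0;
}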