This is a note to let you know that I've just added the patch titled

    io_uring/kbuf: protect io_buffer_list teardown with a reference

to the 6.6-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     io_uring-kbuf-protect-io_buffer_list-teardown-with-a-reference.patch
and it can be found in the queue-6.6 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


>From 6b69c4ab4f685327d9e10caf0d84217ba23a8c4b Mon Sep 17 00:00:00 2001
From: Jens Axboe <axboe@xxxxxxxxx>
Date: Fri, 15 Mar 2024 16:12:51 -0600
Subject: io_uring/kbuf: protect io_buffer_list teardown with a reference

From: Jens Axboe <axboe@xxxxxxxxx>

commit 6b69c4ab4f685327d9e10caf0d84217ba23a8c4b upstream.

No functional changes in this patch, just in preparation for being able
to keep the buffer list alive outside of the ctx->uring_lock.

Cc: stable@xxxxxxxxxxxxxxx # v6.4+
Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 io_uring/kbuf.c |   15 +++++++++++----
 io_uring/kbuf.h |    2 ++
 2 files changed, 13 insertions(+), 4 deletions(-)

--- a/io_uring/kbuf.c
+++ b/io_uring/kbuf.c
@@ -59,6 +59,7 @@ static int io_buffer_add_list(struct io_
 	 * always under the ->uring_lock, but the RCU lookup from mmap does.
 	 */
 	bl->bgid = bgid;
+	atomic_set(&bl->refs, 1);
 	return xa_err(xa_store(&ctx->io_bl_xa, bgid, bl, GFP_KERNEL));
 }
 
@@ -272,6 +273,14 @@ static int __io_remove_buffers(struct io
 	return i;
 }
 
+static void io_put_bl(struct io_ring_ctx *ctx, struct io_buffer_list *bl)
+{
+	if (atomic_dec_and_test(&bl->refs)) {
+		__io_remove_buffers(ctx, bl, -1U);
+		kfree_rcu(bl, rcu);
+	}
+}
+
 void io_destroy_buffers(struct io_ring_ctx *ctx)
 {
 	struct io_buffer_list *bl;
@@ -279,8 +288,7 @@ void io_destroy_buffers(struct io_ring_c
 
 	xa_for_each(&ctx->io_bl_xa, index, bl) {
 		xa_erase(&ctx->io_bl_xa, bl->bgid);
-		__io_remove_buffers(ctx, bl, -1U);
-		kfree_rcu(bl, rcu);
+		io_put_bl(ctx, bl);
 	}
 
 	while (!list_empty(&ctx->io_buffers_pages)) {
@@ -676,9 +684,8 @@ int io_unregister_pbuf_ring(struct io_ri
 	if (!bl->is_mapped)
 		return -EINVAL;
 
-	__io_remove_buffers(ctx, bl, -1U);
 	xa_erase(&ctx->io_bl_xa, bl->bgid);
-	kfree_rcu(bl, rcu);
+	io_put_bl(ctx, bl);
 	return 0;
 }
 
--- a/io_uring/kbuf.h
+++ b/io_uring/kbuf.h
@@ -25,6 +25,8 @@ struct io_buffer_list {
 	__u16 head;
 	__u16 mask;
 
+	atomic_t refs;
+
 	/* ring mapped provided buffers */
 	__u8 is_mapped;
 	/* ring mapped provided buffers, but mmap'ed by application */


Patches currently in stable-queue which might be from axboe@xxxxxxxxx are

queue-6.6/io_uring-use-private-workqueue-for-exit-work.patch
queue-6.6/io_uring-kbuf-protect-io_buffer_list-teardown-with-a-reference.patch
queue-6.6/io_uring-kbuf-hold-io_buffer_list-reference-over-mmap.patch
queue-6.6/io_uring-kbuf-get-rid-of-lower-bgid-lists.patch
queue-6.6/io_uring-kbuf-get-rid-of-bl-is_ready.patch
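
As background for reviewers: the patch converts io_buffer_list teardown to a
"last reference frees" scheme. Creation takes one reference, and io_put_bl()
only removes the buffers and frees the list (deferred via kfree_rcu) once the
final reference is dropped. The standalone C sketch below illustrates that
scheme with hypothetical names (buffer_list, alloc_bl, put_bl) and C11
stdatomic in place of the kernel's atomic_t helpers; it is an illustration of
the pattern, not kernel code.

	#include <stdatomic.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct buffer_list {
		atomic_int refs;	/* last put performs teardown */
		int bgid;
	};

	static struct buffer_list *alloc_bl(int bgid)
	{
		struct buffer_list *bl = calloc(1, sizeof(*bl));

		if (!bl)
			return NULL;
		bl->bgid = bgid;
		atomic_init(&bl->refs, 1);	/* creation owns one reference */
		return bl;
	}

	static void put_bl(struct buffer_list *bl)
	{
		/*
		 * atomic_fetch_sub() returns the old value, so 1 means we
		 * dropped the last reference; this mirrors the kernel's
		 * atomic_dec_and_test(), which returns true when the result
		 * hits zero.
		 */
		if (atomic_fetch_sub(&bl->refs, 1) == 1) {
			printf("tearing down bgid %d\n", bl->bgid);
			free(bl);	/* the kernel defers this via kfree_rcu() */
		}
	}

	int main(void)
	{
		struct buffer_list *bl = alloc_bl(0);

		if (!bl)
			return 1;
		atomic_fetch_add(&bl->refs, 1);	/* e.g. a lookup takes a reference */
		put_bl(bl);	/* lookup drops its reference: no teardown yet */
		put_bl(bl);	/* last reference gone: teardown runs */
		return 0;
	}

The extra reference taken in main() mirrors what the follow-up patch in this
series (io_uring-kbuf-hold-io_buffer_list-reference-over-mmap.patch) relies
on: a lookup can hold the list alive outside ctx->uring_lock, with teardown
safely deferred to whichever side drops the last reference.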