When io_check_coalesce_buffer() meets a single page buffer it bails out
and reports that it can't be coalesced. That's fine for registered
buffers, as io_coalesce_buffer() wouldn't change anything there, but
the region code now uses the function to decide whether to vmap the
buffer or not.

Report that a single page buffer is trivially coalescable and let
io_sqe_buffer_register() filter them out.

Fixes: cc2f1a864c27a ("io_uring/memmap: optimise single folio regions")
Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
---
 io_uring/rsrc.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 9208cf77c41e..0f1cd0a1d880 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -675,14 +675,9 @@ bool io_check_coalesce_buffer(struct page **page_array, int nr_pages,
 	unsigned int count = 1, nr_folios = 1;
 	int i;
 
-	if (nr_pages <= 1)
-		return false;
-
 	data->nr_pages_mid = folio_nr_pages(folio);
-	if (data->nr_pages_mid == 1)
-		return false;
-
 	data->folio_shift = folio_shift(folio);
+
 	/*
 	 * Check if pages are contiguous inside a folio, and all folios have
 	 * the same page count except for the head and tail.
@@ -750,8 +745,10 @@ static struct io_rsrc_node *io_sqe_buffer_register(struct io_ring_ctx *ctx,
 	}
 
 	/* If it's huge page(s), try to coalesce them into fewer bvec entries */
-	if (io_check_coalesce_buffer(pages, nr_pages, &data))
-		coalesced = io_coalesce_buffer(&pages, &nr_pages, &data);
+	if (nr_pages > 1 && io_check_coalesce_buffer(pages, nr_pages, &data)) {
+		if (data.nr_pages_mid != 1)
+			coalesced = io_coalesce_buffer(&pages, &nr_pages, &data);
+	}
 
 	imu = kvmalloc(struct_size(imu, bvec, nr_pages), GFP_KERNEL);
 	if (!imu)
-- 
2.47.1
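
For illustration only, not part of the patch: below is a minimal
userspace C model of the decision flow after this change. All names
here (check_coalesce(), maybe_coalesce(), struct coalesce_info) are
hypothetical stand-ins for io_check_coalesce_buffer(),
io_sqe_buffer_register() and struct io_imu_folio_data; the point it
sketches is that a single page buffer is now reported as trivially
coalescable by the check, and the caller is the one filtering it out.

#include <stdbool.h>
#include <stdio.h>

struct coalesce_info {
	int nr_pages_mid;	/* pages per middle folio */
};

/*
 * Models io_check_coalesce_buffer() after the patch: it no longer
 * rejects single page buffers, it only describes the folio layout.
 */
static bool check_coalesce(int nr_pages, int pages_per_folio,
			   struct coalesce_info *info)
{
	info->nr_pages_mid = pages_per_folio;
	/* contiguity checks elided in this model */
	return true;
}

/*
 * Models the caller-side filtering added to io_sqe_buffer_register():
 * only multi-page buffers backed by multi-page folios are worth
 * coalescing into fewer bvec entries.
 */
static bool maybe_coalesce(int nr_pages, int pages_per_folio)
{
	struct coalesce_info info;

	if (nr_pages > 1 && check_coalesce(nr_pages, pages_per_folio, &info)) {
		if (info.nr_pages_mid != 1)
			return true;	/* would call io_coalesce_buffer() */
	}
	return false;
}

int main(void)
{
	/* single page: coalescable per the check, filtered by the caller */
	printf("1 page            -> coalesce: %d\n", maybe_coalesce(1, 1));
	/* 2M huge page seen as 512 pages: caller goes on to coalesce */
	printf("512 pages (1 THP) -> coalesce: %d\n", maybe_coalesce(512, 512));
	return 0;
}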