Hi,

This series cleans up kbuf management a bit. The first two patches get rid of our array of buffer_lists, as in my testing there's no discernible difference between the xarray lookup and the array. This also removes any difference between lower and higher buffer group IDs, which is nice.

Patch 3 starts using vmap for the non-mmap case for provided buffer rings, which means we can clean up the buffer indexing in io_ring_buffer_select(), as there's now no difference between how we handle the mmap and gup versions of buffer lists.

Patches 4 and 5 are prep patches for patch 6, which switches the mmap buffer_list variant away from remap_pfn_range() and uses vm_insert_pages() instead. This is how it should've been done initially, and as the diffstat shows, it's a nice reduction in code as well.

 include/linux/io_uring_types.h |   4 -
 io_uring/io_uring.c            |  32 ++--
 io_uring/io_uring.h            |   3 -
 io_uring/kbuf.c                | 298 ++++++++++++++---------------------------
 io_uring/kbuf.h                |   8 +-
 mm/nommu.c                     |   7 +
 6 files changed, 119 insertions(+), 233 deletions(-)

-- 
Jens Axboe