Hi,

When selecting provided buffers for a send/recv with bundles, there's no
reason why the number of buffers selected must equal the number of mapped
segments passed to send/recv. If some (or all) of those buffers are
virtually contiguous, they can be collapsed into far fewer segments,
sometimes even a single segment. This avoids costly iteration on the
send/recv processing side.

The return value is the number of bytes sent/received, plus the starting
buffer ID at which the operation began. This is identical to how bundles
already work; from the application's point of view, nothing changes in how
send/recv bundles are handled, so this is a transparent feature.

Patches 1-3 are basic prep patches, and patch 4 enables the actual
coalescing of segments. Coalescing is only enabled for bundles, as those
are the request types that process multiple buffers in a single operation.

Patches are on top of 6.11-rc3 with pending io_uring patches, as well as
the incremental buffer consumption patches [1] posted earlier today.

 io_uring/kbuf.c | 71 ++++++++++++++++++++++++++++++++++++++++++-------
 io_uring/kbuf.h |  7 +++--
 io_uring/net.c  | 55 +++++++++++++++++---------------------
 io_uring/net.h  |  1 +
 4 files changed, 91 insertions(+), 43 deletions(-)

-- 
Jens Axboe