Gabriel Krisman Bertazi <krisman@xxxxxxx> writes:

> The allocation of struct io_buffer for metadata of provided buffers is
> done through a custom allocator that directly gets pages and fragments
> them. But slab would do just fine, as this is not a hot path (in fact,
> it is a deprecated feature) and, by keeping a custom allocator
> implementation, we lose benefits like tracking, poisoning, and
> sanitizers. Finally, the custom code is more complex and requires
> keeping the list of pages in struct ctx for no good reason. This patch
> cleans up this path and just uses slab.
>
> I microbenchmarked it by forcing the allocation of a large number of
> objects with the least number of io_uring commands possible (keeping
> nbufs=USHRT_MAX), with and without the patch. There is a slight
> increase in time spent in the allocation with slab, of course, but even
> when allocating until system resources were exhausted, which is not
> very realistic and happened at around half a billion provided buffers
> for me, it wasn't a significant hit in system time. Especially in a
> real-world scenario, an application doing register/unregister of
> provided buffers will hit ctx->io_buffers_cache more often than
> actually going to slab.
>
> Signed-off-by: Gabriel Krisman Bertazi <krisman@xxxxxxx>

Hi Jens,

Any feedback on this?

--
Gabriel Krisman Bertazi
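
P.S. For anyone skimming the thread, the change described above boils
down to replacing the page-fragmenting code with a dedicated
kmem_cache. Below is a minimal sketch of that shape; the cache name
(io_buf_cachep), the init hook, and the helper names are illustrative
assumptions, not necessarily what the patch itself uses:

	#include <linux/slab.h>
	#include <linux/list.h>
	#include <linux/types.h>

	/* Sketch only: field layout mirrors what the quoted commit
	 * message calls the provided-buffer metadata. */
	struct io_buffer {
		struct list_head list;
		__u64 addr;
		__u32 len;
		__u16 bid;
		__u16 bgid;
	};

	/* One slab cache replaces the custom get-pages-and-fragment
	 * allocator and the per-ctx page list it required. */
	static struct kmem_cache *io_buf_cachep;

	static int __init io_buffer_cache_init(void)
	{
		io_buf_cachep = KMEM_CACHE(io_buffer, SLAB_ACCOUNT);
		return io_buf_cachep ? 0 : -ENOMEM;
	}

	/* Hypothetical helpers: the alloc/free pair the provided-buffer
	 * register/unregister path would call into. */
	static struct io_buffer *io_buffer_get(gfp_t gfp)
	{
		return kmem_cache_alloc(io_buf_cachep, gfp);
	}

	static void io_buffer_put(struct io_buffer *buf)
	{
		kmem_cache_free(io_buf_cachep, buf);
	}

KMEM_CACHE() derives the object size and alignment from the struct
definition, and slab brings along the tracking, poisoning, and
sanitizer coverage the commit message mentions; SLAB_ACCOUNT
additionally charges the allocations to the caller's memcg, which a
hand-rolled page allocator does not get for free.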