On Fri, 6 Nov 2020 19:19:07 +0100
Lorenzo Bianconi <lorenzo@xxxxxxxxxx> wrote:

> XDP bulk APIs introduce a defer/flush mechanism to return
> pages belonging to the same xdp_mem_allocator object
> (identified via the mem.id field) in bulk to optimize
> I-cache and D-cache since xdp_return_frame is usually run
> inside the driver NAPI tx completion loop.
> The bulk queue size is set to 16 to be aligned to how
> XDP_REDIRECT bulking works. The bulk is flushed when
> it is full or when mem.id changes.
> xdp_frame_bulk is usually stored/allocated on the function
> call-stack to avoid locking penalties.
> Current implementation considers only page_pool memory model.
>
> Suggested-by: Jesper Dangaard Brouer <brouer@xxxxxxxxxx>
> Signed-off-by: Lorenzo Bianconi <lorenzo@xxxxxxxxxx>
> ---
>  include/net/xdp.h | 11 ++++++++-
>  net/core/xdp.c    | 61 +++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 71 insertions(+), 1 deletion(-)

I have a number of optimization improvements for this patch, mostly
simple likely()/unlikely() compiler annotations that give a better
code layout for I-cache benefit.  Details in [1].  (Lorenzo is
informed)

[1] https://github.com/xdp-project/xdp-project/blob/master/areas/mem/xdp_bulk_return01.org

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
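
P.S. For anyone reading along who hasn't followed the series: below is
a minimal sketch of how a driver NAPI TX completion loop is meant to
use the bulk return API.  The my_tx_ring / my_fetch_completed_frame
helpers are made-up placeholders for driver-specific code; only struct
xdp_frame_bulk, xdp_return_frame_bulk() and xdp_flush_frame_bulk()
come from the patch.

#include <net/xdp.h>

static void my_tx_complete(struct my_tx_ring *ring)
{
	/* Bulk queue lives on the caller stack, so no locking is
	 * needed to protect it.
	 */
	struct xdp_frame_bulk bq = { .count = 0, .xa = NULL };
	struct xdp_frame *xdpf;

	while ((xdpf = my_fetch_completed_frame(ring))) {
		/* Defers the page return; flushes internally when the
		 * 16-slot queue fills up or when xdpf->mem.id differs
		 * from the cached xdp_mem_allocator.
		 */
		xdp_return_frame_bulk(xdpf, &bq);
	}

	/* Hand back whatever is still queued before leaving NAPI */
	xdp_flush_frame_bulk(&bq);
}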
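
And the kind of annotation I'm proposing in [1], again only as an
illustrative sketch: bulk_queue_add() is a hypothetical stand-in, the
real changes are against xdp_return_frame_bulk() in net/core/xdp.c.

#include <linux/compiler.h>
#include <net/xdp.h>

/* likely()/unlikely() expand to __builtin_expect() and only influence
 * code layout, not correctness: the compiler moves the annotated-cold
 * flush path out of the straight-line hot path, which is where the
 * I-cache benefit comes from.
 */
static void bulk_queue_add(struct xdp_frame_bulk *bq,
			   struct xdp_frame *xdpf)
{
	if (unlikely(bq->count == XDP_BULK_QUEUE_SIZE))
		xdp_flush_frame_bulk(bq);	/* cold: flush is rare */

	bq->q[bq->count++] = xdpf->data;	/* hot: falls through */
}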