On Thu, 4 Feb 2021 01:14:56 +0100 Daniel Borkmann <daniel@xxxxxxxxxxxxx> wrote:

> On 1/29/21 11:04 PM, Lorenzo Bianconi wrote:
> > Split ndo_xdp_xmit and ndo_start_xmit use cases in veth_xdp_rcv routine
> > in order to alloc skbs in bulk for XDP_PASS verdict.
> > Introduce xdp_alloc_skb_bulk utility routine to alloc skb bulk list.
> > The proposed approach has been tested in the following scenario:
> [...]
> > diff --git a/net/core/xdp.c b/net/core/xdp.c
> > index 0d2630a35c3e..05354976c1fc 100644
> > --- a/net/core/xdp.c
> > +++ b/net/core/xdp.c
> > @@ -514,6 +514,17 @@ void xdp_warn(const char *msg, const char *func, const int line)
> >  };
> >  EXPORT_SYMBOL_GPL(xdp_warn);
> >
> > +int xdp_alloc_skb_bulk(void **skbs, int n_skb, gfp_t gfp)
> > +{
> > +	n_skb = kmem_cache_alloc_bulk(skbuff_head_cache, gfp,
> > +				      n_skb, skbs);
>
> Applied, but one question I was wondering about when reading the kmem_cache_alloc_bulk()
> code was whether it would be safer to simply test for kmem_cache_alloc_bulk() != n_skb,
> given it could potentially in future also alloc fewer objs than requested. But I presume
> that if such an extension were implemented, call-sites might need to indicate 'best
> effort' somehow via a flag instead (to handle the < n_skb case). Either way, all current
> callers assume that != 0 means everything went well, so lgtm.

It was Andrew (AKPM) who wanted the API to either return the requested number
of objects or fail. I respected the MM maintainers' request at that point, even
though I wanted the other API, as it has a small performance advantage (not
crossing a page boundary in SLUB).

We discussed it on the MM list at the time, and I see his point: if the API can
allocate fewer objs than requested, then think about how this complicates the
surrounding code. E.g. in this specific code we already have VETH_XDP_BATCH (16)
xdp_frame objects, for which we need to get 16 SKB objects. What should the code
do if it cannot get 16 SKBs?

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
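
To illustrate the complication described above, here is a minimal caller-side
sketch of what a wrapper would have to do if kmem_cache_alloc_bulk() were ever
allowed to hand back fewer objects than requested. The function name
xdp_alloc_skb_bulk_besteffort() and the top-up/roll-back strategy are purely
hypothetical and are not part of the applied patch:

#include <linux/skbuff.h>
#include <linux/slab.h>

/* Hypothetical "best effort" variant: if kmem_cache_alloc_bulk() could
 * return fewer than n_skb objects, the caller would need to top up the
 * batch itself, or roll back, to keep its own logic simple.
 */
static int xdp_alloc_skb_bulk_besteffort(void **skbs, int n_skb, gfp_t gfp)
{
	int got;

	/* Assumed to return anywhere between 0 and n_skb objects in this
	 * scenario; today's API returns either n_skb or 0.
	 */
	got = kmem_cache_alloc_bulk(skbuff_head_cache, gfp, n_skb, skbs);

	/* Top up one object at a time until the batch is complete. */
	while (got < n_skb) {
		void *skb = kmem_cache_alloc(skbuff_head_cache, gfp);

		if (unlikely(!skb)) {
			/* Roll back so the caller still sees all-or-nothing. */
			kmem_cache_free_bulk(skbuff_head_cache, got, skbs);
			return -ENOMEM;
		}
		skbs[got++] = skb;
	}

	return 0;
}

With the all-or-nothing contract that Andrew asked for, none of this fallback
logic is needed: the veth code can simply check the return value and handle the
whole VETH_XDP_BATCH as a unit on failure.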