Re: [PATCH v3 bpf-next] net: veth: alloc skb in bulk for ndo_xdp_xmit

On 1/29/21 11:04 PM, Lorenzo Bianconi wrote:
Split ndo_xdp_xmit and ndo_start_xmit use cases in veth_xdp_rcv routine
in order to alloc skbs in bulk for XDP_PASS verdict.
Introduce xdp_alloc_skb_bulk utility routine to alloc skb bulk list.
The proposed approach has been tested in the following scenario:
[...]
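(For context, the caller side in veth ends up following a pattern roughly like the sketch
below. This is only an illustration: the array size, the variable names and the
GFP_ATOMIC | __GFP_ZERO flags are assumptions about how such a batch path is typically
written, not copied from the hunks quoted here, which only show the net/core/xdp.c side.)

	/* Sketch: allocate skb heads for a whole batch of XDP_PASS frames
	 * up front, then build each skb from its xdp_frame. If the bulk
	 * allocation fails, return all frames instead of falling back to
	 * per-frame allocation.
	 */
	void *skbs[N_FRAMES];	/* N_FRAMES is illustrative */
	int i;

	if (xdp_alloc_skb_bulk(skbs, n_xdpf, GFP_ATOMIC | __GFP_ZERO) < 0) {
		for (i = 0; i < n_xdpf; i++)
			xdp_return_frame(frames[i]);
		return;
	}

	for (i = 0; i < n_xdpf; i++) {
		struct sk_buff *skb = skbs[i];

		skb = __xdp_build_skb_from_frame(frames[i], skb, rq->dev);
		if (!skb) {
			xdp_return_frame(frames[i]);
			continue;
		}
		napi_gro_receive(&rq->xdp_napi, skb);
	}
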
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 0d2630a35c3e..05354976c1fc 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -514,6 +514,17 @@ void xdp_warn(const char *msg, const char *func, const int line)
  };
  EXPORT_SYMBOL_GPL(xdp_warn);
+int xdp_alloc_skb_bulk(void **skbs, int n_skb, gfp_t gfp)
+{
+	n_skb = kmem_cache_alloc_bulk(skbuff_head_cache, gfp,
+				      n_skb, skbs);

Applied, but one question I was wondering about when reading the kmem_cache_alloc_bulk()
code: would it be safer to simply test for kmem_cache_alloc_bulk() != n_skb, given that it
could potentially also allocate fewer objects than requested in the future? Though I presume
that if such an extension were implemented, call-sites would need to indicate 'best effort'
somehow via a flag instead (to handle the < n_skb case). Either way, all current callers
assume that a non-zero return means everything went well, so lgtm.
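(If kmem_cache_alloc_bulk() ever did grow such best-effort semantics, the stricter check
being discussed would look roughly like the hypothetical variant below; this is not what
the patch does, just an illustration of keeping the all-or-nothing contract at this level.)

	/* Hypothetical variant: treat a partial allocation as failure and
	 * hand back whatever objects were returned, so callers keep the
	 * current all-or-nothing behaviour.
	 */
	int xdp_alloc_skb_bulk(void **skbs, int n_skb, gfp_t gfp)
	{
		int n = kmem_cache_alloc_bulk(skbuff_head_cache, gfp,
					      n_skb, skbs);

		if (unlikely(n != n_skb)) {
			if (n)
				kmem_cache_free_bulk(skbuff_head_cache, n, skbs);
			return -ENOMEM;
		}

		return 0;
	}
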

+	if (unlikely(!n_skb))
+		return -ENOMEM;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(xdp_alloc_skb_bulk);
+
  struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf,
  					   struct sk_buff *skb,
  					   struct net_device *dev)




