Alexei Starovoitov wrote:
> On Tue, Mar 03, 2020 at 12:46:58PM +0100, Jesper Dangaard Brouer wrote:
> > The Intel based drivers (ixgbe + i40e) have implemented XDP with
> > headroom 192 bytes and not the recommended 256 bytes defined by
> > XDP_PACKET_HEADROOM. For generic-XDP, accept that this headroom
> > is also a valid size. The reason is to fit two packets on a 4k page.

The driver itself is fairly flexible at this point. I think we should
reconsider pushing the required headroom down in the program metadata
and configuring it at runtime. At the moment I suspect the drivers are
wasting half a page for no good reason in most cases. What is the use
case for >192B headroom? I've not found an actual user who has
complained yet. That said, this resurrects an old debate, so it
probably doesn't need to stall this patch.

> > Still, for generic-XDP, if headroom is less than this, expand it to
> > XDP_PACKET_HEADROOM, as that is the default in most XDP drivers.
> >
> > Tested on ixgbe with xdp_rxq_info --skb-mode and --action XDP_DROP:
> >  - Before: 4,816,430 pps
> >  - After : 7,749,678 pps
> >  (Note that ixgbe in native mode XDP_DROP does 14,704,539 pps)

But why do we care about generic-XDP performance? It seems users should
just use XDP proper on ixgbe and i40e; it is supported there.

> > Signed-off-by: Jesper Dangaard Brouer <brouer@xxxxxxxxxx>
> > ---
> >  include/uapi/linux/bpf.h |    1 +
> >  net/core/dev.c           |    4 ++--
> >  2 files changed, 3 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index 906e9f2752db..14dc4f9fb3c8 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -3312,6 +3312,7 @@ struct bpf_xdp_sock {
> >  };
> >
> >  #define XDP_PACKET_HEADROOM	256
> > +#define XDP_PACKET_HEADROOM_MIN	192
>
> why expose it in uapi?
>
> >  /* User return codes for XDP prog type.
> >   * A valid XDP program must return one of these defined values. All other
> > diff --git a/net/core/dev.c b/net/core/dev.c
> > index 4770dde3448d..9c941cd38b13 100644
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -4518,11 +4518,11 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
> >  		return XDP_PASS;
> >
> >  	/* XDP packets must be linear and must have sufficient headroom
> > -	 * of XDP_PACKET_HEADROOM bytes. This is the guarantee that also
> > +	 * of XDP_PACKET_HEADROOM_MIN bytes. This is the guarantee that also
> >  	 * native XDP provides, thus we need to do it here as well.
> >  	 */
> >  	if (skb_cloned(skb) || skb_is_nonlinear(skb) ||
> > -	    skb_headroom(skb) < XDP_PACKET_HEADROOM) {
> > +	    skb_headroom(skb) < XDP_PACKET_HEADROOM_MIN) {
> >  		int hroom = XDP_PACKET_HEADROOM - skb_headroom(skb);
>
> this looks odd. It's comparing against 192, but doing math with 256.
> I guess that's ok, but needs a clear comment.
> How about just doing 'skb_headroom(skb) < 192' here.
> Or #define 192 right before this function with a comment about ixgbe?

Or just let ixgbe/i40e be slow? I guess I'm missing some context?