On Mon, Dec 21, 2020 at 9:37 AM Jesper Dangaard Brouer <brouer@xxxxxxxxxx> wrote:
>
> On Sat, 19 Dec 2020 18:55:00 +0100
> Lorenzo Bianconi <lorenzo@xxxxxxxxxx> wrote:
>
> > diff --git a/include/net/xdp.h b/include/net/xdp.h
> > index 11ec93f827c0..323340caef88 100644
> > --- a/include/net/xdp.h
> > +++ b/include/net/xdp.h
> > @@ -76,6 +76,13 @@ struct xdp_buff {
> >  	u32 frame_sz; /* frame size to deduce data_hard_end/reserved tailroom*/
> >  };
> >
> > +static __always_inline void
> > +xdp_init_buff(struct xdp_buff *xdp, u32 frame_sz, struct xdp_rxq_info *rxq)
> > +{
> > +	xdp->frame_sz = frame_sz;
> > +	xdp->rxq = rxq;
>
> Later you will add 'xdp->mb = 0' here.

right

> > +}
>
> Via the names of your functions, I assume that xdp_init_buff() is
> called before xdp_prepare_buff(), right?
> (And your pending 'xdp->mb = 0' also prefers this.)
>
> Below in bpf_prog_test_run_xdp() and netif_receive_generic_xdp() you
> violate this order... which will give you headaches when implementing
> the multi-buff support. It is also a bad example for driver developers
> that need to figure out this calling-order from the function names.
>
> Below, will it be possible to have 'init' before 'prepare'?

ack, right. Looking at the code, we can fix it and have xdp_init_buff()
before xdp_prepare_buff(). I will fix it in v5.

Regards,
Lorenzo

> > +
> >
> > /* Reserve memory area at end-of data area.
> >  *
> >  * This macro reserves tailroom in the XDP buffer by limiting the
> > diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
> > index c1c30a9f76f3..a8fa5a9e4137 100644
> > --- a/net/bpf/test_run.c
> > +++ b/net/bpf/test_run.c
> > @@ -640,10 +640,10 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
> >  	xdp.data = data + headroom;
> >  	xdp.data_meta = xdp.data;
> >  	xdp.data_end = xdp.data + size;
> > -	xdp.frame_sz = headroom + max_data_sz + tailroom;
> >
> >  	rxqueue = __netif_get_rx_queue(current->nsproxy->net_ns->loopback_dev, 0);
> > -	xdp.rxq = &rxqueue->xdp_rxq;
> > +	xdp_init_buff(&xdp, headroom + max_data_sz + tailroom,
> > +		      &rxqueue->xdp_rxq);
> >  	bpf_prog_change_xdp(NULL, prog);
> >  	ret = bpf_test_run(prog, &xdp, repeat, &retval, &duration, true);
> >  	if (ret)
> > diff --git a/net/core/dev.c b/net/core/dev.c
> > index a46334906c94..b1a765900c01 100644
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -4588,11 +4588,11 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
> >  	struct netdev_rx_queue *rxqueue;
> >  	void *orig_data, *orig_data_end;
> >  	u32 metalen, act = XDP_DROP;
> > +	u32 mac_len, frame_sz;
> >  	__be16 orig_eth_type;
> >  	struct ethhdr *eth;
> >  	bool orig_bcast;
> >  	int hlen, off;
> > -	u32 mac_len;
> >
> >  	/* Reinjected packets coming from act_mirred or similar should
> >  	 * not get XDP generic processing.
> > @@ -4631,8 +4631,8 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
> >  	xdp->data_hard_start = skb->data - skb_headroom(skb);
> >
> >  	/* SKB "head" area always have tailroom for skb_shared_info */
> > -	xdp->frame_sz = (void *)skb_end_pointer(skb) - xdp->data_hard_start;
> > -	xdp->frame_sz += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> > +	frame_sz = (void *)skb_end_pointer(skb) - xdp->data_hard_start;
> > +	frame_sz += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> >
> >  	orig_data_end = xdp->data_end;
> >  	orig_data = xdp->data;
> > @@ -4641,7 +4641,7 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
> >  	orig_eth_type = eth->h_proto;
> >
> >  	rxqueue = netif_get_rxqueue(skb);
> > -	xdp->rxq = &rxqueue->xdp_rxq;
> > +	xdp_init_buff(xdp, frame_sz, &rxqueue->xdp_rxq);
> >
> >  	act = bpf_prog_run_xdp(xdp_prog, xdp);
> >
>
> --
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer
>