> On Fri, 4 Sep 2020 09:50:31 +0200
> Lorenzo Bianconi <lorenzo@xxxxxxxxxx> wrote:
> 
> > > On Thu, Sep 03, 2020 at 10:58:50PM +0200, Lorenzo Bianconi wrote:
> > > > +BPF_CALL_2(bpf_xdp_adjust_mb_header, struct xdp_buff *, xdp,
> > > > +           int, offset)
> > > > +{
> > > > +        void *data_hard_end, *data_end;
> > > > +        struct skb_shared_info *sinfo;
> > > > +        int frag_offset, frag_len;
> > > > +        u8 *addr;
> > > > +
> > > > +        if (!xdp->mb)
> > > > +                return -EOPNOTSUPP;
> > > > +
> > > > +        sinfo = xdp_get_shared_info_from_buff(xdp);
> > > > +
> > > > +        frag_len = skb_frag_size(&sinfo->frags[0]);
> > > > +        if (offset > frag_len)
> > > > +                return -EINVAL;
> > > > +
> > > > +        frag_offset = skb_frag_off(&sinfo->frags[0]);
> > > > +        data_end = xdp->data_end + offset;
> > > > +
> > > > +        if (offset < 0 && (-offset > frag_offset ||
> > > > +                           data_end < xdp->data + ETH_HLEN))
> > > > +                return -EINVAL;
> > > > +
> > > > +        data_hard_end = xdp_data_hard_end(xdp); /* use xdp->frame_sz */
> > > > +        if (data_end > data_hard_end)
> > > > +                return -EINVAL;
> > > > +
> > > > +        addr = page_address(skb_frag_page(&sinfo->frags[0])) + frag_offset;
> > > > +        if (offset > 0) {
> > > > +                memcpy(xdp->data_end, addr, offset);
> > > > +        } else {
> > > > +                memcpy(addr + offset, xdp->data_end + offset, -offset);
> > > > +                memset(xdp->data_end + offset, 0, -offset);
> > > > +        }
> > > > +
> > > > +        skb_frag_size_sub(&sinfo->frags[0], offset);
> > > > +        skb_frag_off_add(&sinfo->frags[0], offset);
> > > > +        xdp->data_end = data_end;
> > > > +
> > > > +        return 0;
> > > > +}
> > > 
> > > wait a sec. Are you saying that multi-buffer XDP actually should be skb based?
> > > If that's what the mvneta driver is doing that's fine, but that is not a
> > > reasonable requirement to put on all other drivers.
> > 
> > I did not get what you mean here. The xdp multi-buffer layout uses
> > the skb_shared_info at the end of the first buffer to link subsequent
> > frames [0], and we rely on the skb_frag* utilities to set/read offset and
> > length of subsequent buffers.
> 
> Yes, for now the same layout as "skb_shared_info" is "reused", but I
> think we should think of this as "xdp_shared_info" instead, as how it
> is used for XDP is going to diverge from SKBs. We already discussed (in
> a conf call) that we could store the total len of the "frags" here, to
> simplify the other helper.

I like this approach. At the end of the first fragment we can have
something like:

struct xdp_shared_info {
        skb_frag_t frags[16];
        int n_frags;
        int frag_len;
};

or do you prefer not to use the skb_frag_t struct?

> 
> Using the skb_frag_* helper functions is misleading, and will make it
> more difficult to diverge from how SKBs handle frags. What about
> introducing xdp_frag_* wrappers? (what do others think?)

I am fine with having some dedicated helpers. Anyway we need to
construct the xdp_buff {} when receiving the DMA descriptors.

Regards,
Lorenzo

> 
> > 
> > [0] http://people.redhat.com/lbiancon/conference/NetDevConf2020-0x14/add-xdp-on-driver.html
> > - XDP multi-buffers section (slide 40)
> 
> -- 
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer
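To make the xdp_frag_* wrapper idea above concrete, here is a minimal
sketch in C. It assumes the xdp_shared_info layout proposed in the
thread; all names below (the struct fields and the xdp_frag_* helpers)
are illustrative assumptions, not code from the patch set. The
wrappers simply forward to the existing skb_frag_* helpers for now, so
call sites stop depending on SKB naming and the layout can diverge
later without touching callers:

/* Sketch only: assumes the xdp_shared_info layout proposed above,
 * stored at the end of the first buffer like skb_shared_info is today.
 */
struct xdp_shared_info {
        skb_frag_t frags[16];   /* same frag layout as skb_shared_info */
        int n_frags;            /* number of fragments in use */
        int frag_len;           /* total length of all fragments */
};

/* xdp_frag_* wrappers: thin forwarding layer over the skb_frag_*
 * helpers, so XDP code does not reference SKB naming directly.
 */
static inline unsigned int xdp_frag_size(const skb_frag_t *frag)
{
        return skb_frag_size(frag);
}

static inline unsigned int xdp_frag_off(const skb_frag_t *frag)
{
        return skb_frag_off(frag);
}

static inline void xdp_frag_size_sub(skb_frag_t *frag, int delta)
{
        skb_frag_size_sub(frag, delta);
}

static inline void xdp_frag_off_add(skb_frag_t *frag, int delta)
{
        skb_frag_off_add(frag, delta);
}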
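Jesper's point about storing the total length of the frags can also be
sketched: with frag_len cached in xdp_shared_info (and kept up to date
by the wrappers above), a helper reporting the payload carried in the
fragments becomes a single field read instead of a walk over frags[].
The helper name and the xdp->mb check are assumptions for illustration
only:

/* Hypothetical helper, assuming the xdp_shared_info sketch above sits
 * where skb_shared_info lives today (at xdp_data_hard_end(xdp)).
 */
BPF_CALL_1(bpf_xdp_get_frags_len, struct xdp_buff *, xdp)
{
        struct xdp_shared_info *xdp_sinfo;

        if (!xdp->mb)   /* single-buffer frame: no fragments */
                return 0;

        xdp_sinfo = (struct xdp_shared_info *)xdp_data_hard_end(xdp);
        return xdp_sinfo->frag_len;     /* no loop over frags[] needed */
}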