On Thu, 3 Dec 2020 10:25:41 -0800 sdf@xxxxxxxxxx wrote:

> > +BPF_CALL_5(bpf_skb_check_mtu, struct sk_buff *, skb,
> > +	   u32, ifindex, u32 *, mtu_len, s32, len_diff, u64, flags)
> > +{
> > +	int ret = BPF_MTU_CHK_RET_FRAG_NEEDED;
> > +	struct net_device *dev = skb->dev;
> > +	int len;
> > +	int mtu;
> > +
> > +	if (flags & ~(BPF_MTU_CHK_SEGS))
> > +		return -EINVAL;
> > +
> > +	dev = __dev_via_ifindex(dev, ifindex);
> > +	if (!dev)
> > +		return -ENODEV;
> > +
> > +	mtu = READ_ONCE(dev->mtu);
> > +
> > +	/* TC len is L2, remove L2-header as dev MTU is L3 size */
>
> [..]
>
> > +	len = skb->len - ETH_HLEN;
>
> Any reason not to do s/ETH_HLEN/dev->hard_header_len/ (or min_header_len?)
> throughout this patch?

Will fix in V9. There is a very small (performance) overhead, mostly
because the net_device struct layout has placed mtu and hard_header_len
on different cache lines. (That is something that should be fixed
separately.)

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
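
P.S. For reference, a rough sketch of what the suggested substitution
could look like at the quoted call site; this is only an illustration of
the s/ETH_HLEN/dev->hard_header_len/ idea, not the actual V9 change,
which may also need to consider the min_header_len question above:

	/* TC len is L2, remove the link-layer header as dev MTU is an
	 * L3 size.  Hypothetical variant: use dev->hard_header_len
	 * instead of hard-coding Ethernet's ETH_HLEN, so non-Ethernet
	 * devices also get the correct L3 length.  Note this reads a
	 * second net_device cache line (the small overhead mentioned
	 * above).
	 */
	len = skb->len - dev->hard_header_len;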