On Tue, Oct 13, 2020 at 04:07:26PM -0700, Jakub Kicinski wrote:
> On Tue, 13 Oct 2020 22:40:09 +0200 Jesper Dangaard Brouer wrote:
> > > FWIW I took a quick swing at testing it with the HW I have and it did
> > > exactly what hardware should do. The TX unit entered an error state
> > > and then the driver detected that and reset it a few seconds later.
> >
> > The drivers (i40e, mlx5, ixgbe) I tested with didn't enter an error
> > state when getting packets exceeding the MTU. I didn't go much above
> > 4K, so maybe I didn't trigger those cases.
>
> You probably need to go above 16k to get out of the acceptable jumbo
> frame size. I tested ixgbe by converting TSO frames to large TCP frames,
> at low probability.

How about we set __bpf_skb_max_len() to jumbo, like 8k, and be done with
it? I guess some badly written driver/fw may still hang with an <= 8k skb
that bpf redirected from one netdev with mtu=jumbo to another netdev with
mtu=1500, but then it's really the job of the driver/fw to deal with that
cleanly. I think checking skb->tx_dev->mtu for every xmited packet is not
great.

For a typical load balancer it would be good to have MRU 1500 and MTU
15xx, especially if it's internet facing, just to drop all known big
packets in hw via the MRU check. But the stack doesn't have an MRU vs MTU
distinction, and XDP_TX doesn't adhere to the MTU; xdp_data_hard_end is
the limit. So xdp already allows growing the packet beyond the MTU.

I think raising the artificial limit in __bpf_skb_max_len() to 8k will
keep it safe enough for all practical cases and will avoid unnecessary
checks and complexity in the xmit path.