On Tue, Jan 31, 2023 at 7:08 PM Alexander Lobakin <alexandr.lobakin@xxxxxxxxx> wrote:
>
> From: Jason Xing <kerneljasonxing@xxxxxxxxx>
> Date: Tue, 31 Jan 2023 11:00:05 +0800
>
> > On Mon, Jan 30, 2023 at 11:09 PM Maciej Fijalkowski
> > <maciej.fijalkowski@xxxxxxxxx> wrote:
> >>
> >> On Fri, Jan 27, 2023 at 08:20:18PM +0800, Jason Xing wrote:
> >>> From: Jason Xing <kernelxing@xxxxxxxxxxx>
> >>>
> >>> I encountered one case where I cannot increase the MTU size directly
> >>> from 1500 to 2000 with XDP enabled if the server is equipped with an
> >>> IXGBE card, which happened on thousands of servers in the production
> >>> environment.
> >>
> >
> >> You said in this thread that you've done several tests - what were they?
> >
> > Tests against XDP are running on the server side when the MTU varies from
> > 1500 to 3050 (not including ETH_HLEN, ETH_FCS_LEN and VLAN_HLEN) for a
>
> BTW, if ixgbe allows you to set an MTU of 3050, it needs to be fixed. Intel
> drivers at some point didn't take the second VLAN tag into account,

Yes, I noticed that. It should be "int new_frame_size = new_mtu +
ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2)" instead of only one VLAN_HLEN,
which is used to compute the real frame size in the ixgbe_change_mtu()
function.

I'm wondering if I could submit another patch to fix the issue you
mentioned, because the current patch addresses a different issue. Does
that make sense?

If you're available, please help me review the v3 patch I've already
sent to the mailing list. Thanks anyway. The link is
https://lore.kernel.org/lkml/20230131032357.34029-1-kerneljasonxing@xxxxxxxxx/ .

Thanks,
Jason

> thus it was possible to trigger issues on Q-in-Q setups. AFAICS, not all
> of them were fixed.
>
> > few days.
> > I chose the iperf tool to test the maximum throughput and observe the
> > behavior when the machines are under greater pressure. Also, I used
> > netperf to send packets of different sizes to the server side with
> > different modes (TCP_RR/_STREAM) applied.
> [...]
>
> Thanks,
> Olek