The 11/10/2022 17:21, Alexander Lobakin wrote:
> Hi,
>
> > From: Andrew Lunn <andrew@xxxxxxx>
> > Date: Thu, 10 Nov 2022 14:57:35 +0100
> >
> > > Nice stuff! I hear from time to time that XDP is for 10G+ NICs only,
> > > but I'm not a fan of such claims, and this series proves once again
> > > that XDP fits any hardware ^.^
> >
> > The Freescale FEC recently gained XDP support. Many variants of it are
> > Fast Ethernet only.
> >
> > What I found most interesting about that patchset was that the use of
> > the page_pool API made the driver significantly faster for the general
> > case as well as for XDP.
>
> The driver didn't have any page recycling or page splitting logic,
> while Page Pool recycles even pages from skbs if
> skb_mark_for_recycle() is used, which is the case here. So it
> significantly reduced the number of new page allocations for Rx, if
> there are any left at all.
> Plus, Page Pool allocates pages in bulks (of 16, IIRC), not one by
> one, which reduces CPU overhead as well.

Just to make sure that everything is clear: the results I have shown in
the cover letter are without any XDP program on the interfaces, because
I thought that is the correct comparison of the results before and
after all these changes.

Once I add an XDP program on the interface, the performance drops. The
program looks for some ether types and always returns XDP_PASS. These
are the results with such an XDP program on the interface:

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.01 sec   486 MBytes   408 Mbits/sec    0    sender
[  5]   0.00-10.00 sec   483 MBytes   405 Mbits/sec         receiver

> >
> > 	Andrew
>
> Thanks,
> Olek

--
/Horatiu
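
For reference, below is a minimal sketch of the kind of test program
described above. The actual program is not part of this message, so the
EtherTypes checked (IPv4/IPv6) and the program name are only illustrative
assumptions, not the program used for the numbers quoted earlier:

/* Hedged sketch only: the message just says the test program "looks for
 * some ether types and always returns XDP_PASS". The EtherTypes checked
 * here and the function name are assumptions for illustration.
 */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int xdp_ethtype_pass(struct xdp_md *ctx)
{
	void *data     = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct ethhdr *eth = data;

	/* Bounds check required by the verifier before reading the header. */
	if ((void *)(eth + 1) > data_end)
		return XDP_PASS;

	/* "Look for some ether types", but never drop or redirect. */
	switch (bpf_ntohs(eth->h_proto)) {
	case ETH_P_IP:
	case ETH_P_IPV6:
		/* A real test could bump a counter here; the frame still
		 * goes up the stack unmodified.
		 */
		break;
	default:
		break;
	}

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

Such an object can be attached with something like
'ip link set dev <iface> xdp obj prog.o sec xdp', which is roughly what the
numbers above exercise: the per-packet cost of entering the BPF program even
though every frame is passed on.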
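
For completeness, here is a generic sketch of the recycling pattern
Alexander describes in the quoted text. This is not the lan966x or FEC
driver code; the my_* structure and function names are invented for the
example:

/* Hedged sketch of the generic page_pool recycling pattern: pages are
 * allocated from the pool (which bulks allocations internally) and are
 * returned to it when skbs marked with skb_mark_for_recycle() are freed.
 */
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>
#include <linux/netdevice.h>
#include <linux/numa.h>
#include <linux/skbuff.h>
#include <net/page_pool.h>

struct my_rx_ring {
	struct page_pool *pp;
	struct napi_struct napi;
};

static int my_rx_ring_init(struct my_rx_ring *ring, struct device *dev)
{
	struct page_pool_params pp_params = {
		.order		= 0,
		.pool_size	= 256,		/* sized to the Rx ring */
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	ring->pp = page_pool_create(&pp_params);
	return PTR_ERR_OR_ZERO(ring->pp);
}

/* Refill: pages come from the pool, which allocates in bulk internally and
 * hands back recycled pages, instead of hitting the page allocator for
 * every buffer.
 */
static struct page *my_rx_refill(struct my_rx_ring *ring)
{
	return page_pool_dev_alloc_pages(ring->pp);
}

/* Delivery: marking the skb ties its pages back to the pool, so they are
 * recycled when the skb is freed rather than handed back to the page
 * allocator.
 */
static void my_rx_deliver(struct my_rx_ring *ring, struct sk_buff *skb)
{
	skb_mark_for_recycle(skb);
	napi_gro_receive(&ring->napi, skb);
}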