> > How does this affect platforms like Vybrid with its fast Ethernet?
>
> Sorry, I don't have the Vybrid platform, but I don't think it has much
> impact; at most it just takes up some more memory.

It has been 6 months since the page pool patches were posted and I asked
about benchmark results for other platforms like Vybrid. Is it so hard
to get hold of reference platforms? Fugang Duan used to have a test farm
of all sorts of boards and reported to me the regressions I introduced
with MDIO changes and PM changes. As somebody who appears to be an NXP
FEC maintainer, I would expect you to have access to a range of
hardware, especially since XDP and eBPF are a bit of a niche for the
embedded processors which NXP produce. You want to be sure your changes
don't regress the main use cases, which I guess are plain networking.

> > Does the burst latency go up?
>
> No. For fec, when a packet is attached to the BDs, the software
> immediately triggers the hardware to send the packet. In addition, I
> think it may improve the latency, because the size of the tx ring
> becomes larger, and more packets can be attached to the BD ring for
> burst traffic.

And a bigger burst means more latency. Read about bufferbloat. While you
have iperf running saturating the link, try a ping as well. How does the
ping latency change with more TX buffers? Ideally you want enough
transmit buffers to keep the link full, but no more. If the driver is
using BQL, the network stack will help with this.

> Below are the results on i.MX6UL/8MM/8MP/8ULP/93 platforms. i.MX6UL
> and 8ULP only support Fast Ethernet; the others support 1G.

Thanks for the benchmark numbers. Please get into the habit of including
them. We like to see justification for any sort of performance tweaks.

	Andrew