> After trying various methods, I think that increasing the size of
> the tx BD ring is simple and effective. Maybe the best solution
> would be to allocate a NAPI instance for each queue to improve the
> efficiency of the NAPI callback, but that change is a bit big and I
> did not try it. Perhaps it will be implemented in a future patch.

How does this affect platforms like Vybrid with its fast Ethernet?
Does the burst latency go up?

> In addition, this patch also updates the tx_stop_threshold and the
> tx_wake_threshold of the tx ring. In the previous logic, the value
> of tx_stop_threshold was 217 while the value of tx_wake_threshold
> was 147; it does not make sense for tx_wake_threshold to be less
> than tx_stop_threshold.

What do these actually mean? I could imagine that as the ring fills
you don't want to stop until it is 217/512 full. There is then some
hysteresis, such that it has to drop below 147/512 before more can be
added? (See the toy model at the end of this mail for what I have in
mind.)

> Besides, both the XDP path and the 'slow path' share the tx BD
> rings. So if tx_stop_threshold is 217, then in the case of heavy
> XDP traffic the slow path is easily stopped, which will have a
> serious impact on the slow path.

Please post your iperf results for various platforms, so we can see
the effects of this. We generally don't accept tuning patches without
benchmarks which prove the improvements, and also show there is no
regression. And given the wide variety of SoCs using the FEC, I
expect testing on a number of SoCs, both Fast and 1G.

	Andrew
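
P.S. To make the hysteresis question above concrete, here is a toy
userspace model of the stop/wake behaviour I have in mind. All the
names and the exact comparison semantics are my own assumption for
illustration; this is not code taken from fec_main.c:

/* Toy model of stop/wake hysteresis on a 512-entry tx ring.
 * Assumed semantics: stop once usage reaches STOP_THRESHOLD,
 * wake only after usage drops below WAKE_THRESHOLD. */
#include <stdbool.h>
#include <stdio.h>

#define RING_SIZE       512
#define STOP_THRESHOLD  217     /* stop the queue at this usage */
#define WAKE_THRESHOLD  147     /* wake it below this usage */

struct txq_model {
        int in_use;             /* BDs currently occupied */
        bool stopped;
};

/* Called after queuing a packet: stop if the ring is filling up. */
static void tx_maybe_stop(struct txq_model *q)
{
        if (!q->stopped && q->in_use >= STOP_THRESHOLD)
                q->stopped = true;
}

/* Called from the completion path: wake only once usage has dropped
 * well below the stop point, so the queue does not bounce. */
static void tx_maybe_wake(struct txq_model *q)
{
        if (q->stopped && q->in_use < WAKE_THRESHOLD)
                q->stopped = false;
}

int main(void)
{
        struct txq_model q = { 0, false };

        q.in_use = 217;         /* ring fills under load */
        tx_maybe_stop(&q);
        printf("in_use=%d stopped=%d\n", q.in_use, q.stopped);

        q.in_use = 200;         /* completions, still above wake point */
        tx_maybe_wake(&q);
        printf("in_use=%d stopped=%d\n", q.in_use, q.stopped);

        q.in_use = 146;         /* below 147/512, queue wakes */
        tx_maybe_wake(&q);
        printf("in_use=%d stopped=%d\n", q.in_use, q.stopped);

        return 0;
}

With this ordering, stop above wake gives real hysteresis. If the
driver instead compares the thresholds against free descriptors, the
same 217/147 pair would wake the queue almost immediately after
stopping it, which would match the oddity the patch description
complains about.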