Re: [PATCH v3 net-next 2/2] xdp: add multi-buff support for xdp running in generic mode

On Mon, 4 Dec 2023 16:43:56 +0100 Lorenzo Bianconi wrote:
> yes, I was thinking about it actually.
> I ran some preliminary tests to check whether we are introducing any
> performance penalties.
> My setup relies on a couple of veth pairs and an eBPF program that performs
> XDP_REDIRECT from one pair to the other. I am running the program in XDP
> driver mode (not generic mode).
> 
> v00 (NS:ns0 - 192.168.0.1/24) <---> (NS:ns1 - 192.168.0.2/24) v01    v10 (NS:ns1 - 192.168.1.1/24) <---> (NS:ns2 - 192.168.1.2/24) v11
> 
> v00: iperf3 client
> v11: iperf3 server
> 
> I ran the test with different MTU values (1500B, 8KB, 64KB):
> 
> net-next veth codebase:
> =======================
> - MTU  1500: iperf3 ~  4.37Gbps
> - MTU  8000: iperf3 ~  9.75Gbps
> - MTU 64000: iperf3 ~ 11.24Gbps
> 
> net-next veth codebase + page_frag_cache instead of page_pool:
> ==============================================================
> - MTU  1500: iperf3 ~  4.99Gbps (+14%)
> - MTU  8000: iperf3 ~  8.5Gbps  (-12%)
> - MTU 64000: iperf3 ~ 11.9Gbps  ( +6%)
> 
> It seems there is no clear winner between page_pool and
> page_frag_cache. What do you think?
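
[Editor's note: for reference, the XDP_REDIRECT test program described in the
quoted setup is presumably something along the lines of the minimal sketch
below; the map name, program name, and single-entry devmap layout are
assumptions for illustration, not taken from the actual test program.]

/* Minimal sketch (not the actual test program): redirect every frame
 * received on the ingress veth to the ifindex stored in tx_port[0].
 * User space is expected to populate the map with the egress ifindex.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u32);
} tx_port SEC(".maps");

SEC("xdp")
int xdp_redirect_prog(struct xdp_md *ctx)
{
	return bpf_redirect_map(&tx_port, 0, 0);
}

char _license[] SEC("license") = "GPL";

A program like this would be attached in driver mode with something like
"ip link set dev v01 xdpdrv obj xdp_redirect.o sec xdp" (device and object
names are assumptions).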

Hm, interesting. Are the iperf processes running on different cores?
It may be worth pinning them (both to the same core and to different
cores) to make sure the cache effects are isolated.
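
[Editor's note: to isolate the cache effects mentioned above, one option is
to pin each iperf3 instance to a known CPU, e.g. with taskset, or with a
small wrapper along these lines; the CPU argument and iperf3 invocation are
assumptions for illustration.]

/* Hypothetical wrapper: pin the current process to the CPU given as
 * argv[1], then exec iperf3 toward the test server from the topology above.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	cpu_set_t set;
	int cpu = argc > 1 ? atoi(argv[1]) : 0;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	if (sched_setaffinity(0, sizeof(set), &set) < 0) {
		perror("sched_setaffinity");
		return 1;
	}

	/* 192.168.1.2 is the iperf3 server (v11) in the quoted topology. */
	execlp("iperf3", "iperf3", "-c", "192.168.1.2", (char *)NULL);
	perror("execlp");
	return 1;
}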
