Re: [RFC net-next] net: veth: reduce page_pool memory footprint using half page per-buffer

On Wed, 17 May 2023 00:52:25 +0200 Lorenzo Bianconi wrote:
> I am testing this RFC patch in the scenario reported below:
> 
> iperf tcp tx --> veth0 --> veth1 (xdp_pass) --> iperf tcp rx
> 
> - 6.4.0-rc1 net-next:
>   MTU 1500B: ~ 7.07 Gbps
>   MTU 8000B: ~ 14.7 Gbps
> 
> - 6.4.0-rc1 net-next + page_pool frag support in veth:
>   MTU 1500B: ~ 8.57 Gbps
>   MTU 8000B: ~ 14.5 Gbps
> 
> side note: it seems there is a regression between 6.2.15 and 6.4.0-rc1 net-next
> (even without latest veth page_pool patches) in the throughput I can get in the
> scenario above, but I have not looked into it yet.
> 
> - 6.2.15:
>   MTU 1500B: ~ 7.91 Gbps
>   MTU 8000B: ~ 14.1 Gbps
> 
> - 6.4.0-rc1 net-next w/o commits [0],[1],[2]:
>   MTU 1500B: ~ 6.38 Gbps
>   MTU 8000B: ~ 13.2 Gbps

If the benchmark is iperf, wouldn't working towards preserving the GSO
status across XDP (assuming the prog is multi-buf-capable) be the most
beneficial optimization?
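For anyone wanting to reproduce the topology above, a minimal sketch follows. It must run as root; the namespace name, addresses, and the xdp_pass object file (a program that just does "return XDP_PASS;") are assumptions for illustration, not details taken from this thread.

```shell
# Create a veth pair with one end moved into a separate namespace
# (names and addresses are illustrative).
ip netns add peer
ip link add veth0 type veth peer name veth1 netns peer
ip addr add 192.168.50.1/24 dev veth0
ip netns exec peer ip addr add 192.168.50.2/24 dev veth1
ip link set veth0 up
ip netns exec peer ip link set veth1 up

# Pick the MTU under test (1500 or 8000 in the numbers above).
ip link set veth0 mtu 8000
ip netns exec peer ip link set veth1 mtu 8000

# Attach a trivial XDP_PASS program to the receiving end; xdp_pass.o
# is an assumed pre-built BPF object with a section named "xdp".
ip netns exec peer ip link set veth1 xdp obj xdp_pass.o sec xdp

# TCP throughput: iperf3 server in the namespace, client on the host.
ip netns exec peer iperf3 -s -D
iperf3 -c 192.168.50.2 -t 30
```

Since this is a privileged network-config recipe, it is shown as a sketch rather than a tested script.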



