Re: [RFC/RFT v2 0/3] Introduce GRO support to cpumap codebase

From: Alexander Lobakin <aleksander.lobakin@xxxxxxxxx>
Date: Thu, 5 Dec 2024 11:38:11 +0100

> From: Daniel Xu <dxu@xxxxxxxxx>
> Date: Wed, 04 Dec 2024 13:51:08 -0800
> 
>>
>>
>> On Wed, Dec 4, 2024, at 8:42 AM, Alexander Lobakin wrote:
>>> From: Jakub Kicinski <kuba@xxxxxxxxxx>
>>> Date: Tue, 3 Dec 2024 16:51:57 -0800
>>>
>>>> On Tue, 3 Dec 2024 12:01:16 +0100 Alexander Lobakin wrote:
>>>>>>> @ Jakub,  
>>>>>>
>>>>>> Context? What doesn't work and why?  
>>>>>
>>>>> My tests show the same perf as on Lorenzo's series, but I test with UDP
>>>>> trafficgen. Daniel tests TCP and the results are much worse than with
>>>>> Lorenzo's implementation.
>>>>> I suspect this is related to how NAPI performs flushes / decides
>>>>> whether to repoll again or exit vs how the kthread does that (even
>>>>> though I also try to flush only every 64 frames or when the ring is
>>>>> empty). Or maybe to the fact that the kthread runs in process context
>>>>> outside any softirq, while with NAPI the whole loop runs inside the RX
>>>>> softirq.
>>>>>
>>>>> Jesper said that he'd like to see cpumap still using own kthread, so
>>>>> that its priority can be boosted separately from the backlog. That's why
>>>>> we asked you whether it would be fine to have cpumap as threaded NAPI in
>>>>> regards to all this :D
>>>>
>>>> Certainly not without a clear understanding what the problem with 
>>>> a kthread is.
>>>
>>> Yes, sure thing.
>>>
>>> The bad thing is that I can't reproduce Daniel's problem >_< Previously, I
>>> was testing with the UDP trafficgen and got up to 80% improvement over
>>> the baseline. Now I tested TCP and got up to 70% improvement, no
>>> regressions whatsoever =\
>>>
>>> I don't know where this regression on Daniel's setup comes from. Is it
>>> a multi-thread or a single-thread test?
>>
>> 8 threads with 16 flows over them (-T8 -F16)
>>
>>> What app do you use: iperf, netperf,
>>> neper, Microsoft's app (forgot the name)?
>>
>> neper, tcp_stream.
> 
> Let me recheck with neper -T8 -F16, I'll post my results soon.
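
For reference, a neper invocation along these lines should reproduce the
test (the server address below is only a placeholder, not Daniel's actual
setup):

  # receiver side (the box doing the cpumap redirect)
  ./tcp_stream -T8 -F16
  # sender side
  ./tcp_stream -c -H 192.0.2.1 -T8 -F16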

kernel     direct T1    direct T8F16    cpumap T1    cpumap T8F16
clean      28 Gbps      51 Gbps         13 Gbps       9 Gbps
GRO        28 Gbps      51 Gbps         26 Gbps      18 Gbps

100% gain, no regressions =\

My XDP prog is simple (upstream xdp-tools repo with no changes):

numactl -N 0 xdp-tools/xdp-bench/xdp-bench redirect-cpu -c 23 -s \
        -p no-touch ens802f0np0

IOW, it simply redirects everything from any Rx queue to CPU 23 (same
NUMA node) without looking into the headers or packet data.
Do you test with more sophisticated XDP prog?
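
In case it helps when comparing setups, here's roughly what such a
"no-touch" cpumap redirect looks like as a standalone XDP program (just a
sketch, not the actual xdp-bench sources; the map and function names are
made up):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_CPUMAP);
	__uint(max_entries, 64);
	__type(key, __u32);
	__type(value, struct bpf_cpumap_val);
} cpu_map SEC(".maps");

SEC("xdp")
int redirect_no_touch(struct xdp_md *ctx)
{
	/* Target CPU index in cpu_map; 23 = same NUMA node as the NIC.
	 * Userspace must populate this entry with a queue size before
	 * traffic can actually be redirected.
	 */
	__u32 cpu = 23;

	/* No header parsing at all: every frame goes straight to the cpumap. */
	return bpf_redirect_map(&cpu_map, cpu, 0);
}

char _license[] SEC("license") = "GPL";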

Thanks,
Olek
