Re: [RFC/RFT v2 0/3] Introduce GRO support to cpumap codebase

> From: Jesper Dangaard Brouer <hawk@xxxxxxxxxx>
> Date: Tue, 26 Nov 2024 18:12:27 +0100
> 
> > 
> > 
> > 
> > On 26/11/2024 18.02, Lorenzo Bianconi wrote:
> >>> From: Daniel Xu <dxu@xxxxxxxxx>
> >>> Date: Mon, 25 Nov 2024 16:56:49 -0600
> >>>
> >>>>
> >>>>
> >>>> On Mon, Nov 25, 2024, at 9:12 AM, Alexander Lobakin wrote:
> >>>>> From: Daniel Xu <dxu@xxxxxxxxx>
> >>>>> Date: Fri, 22 Nov 2024 17:10:06 -0700
> >>>>>
> >>>>>> Hi Olek,
> >>>>>>
> >>>>>> Here are the results.
> >>>>>>
> >>>>>> On Wed, Nov 13, 2024 at 03:39:13PM GMT, Daniel Xu wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>> On Tue, Nov 12, 2024, at 9:43 AM, Alexander Lobakin wrote:
> >>>>>
> >>>>> [...]
> >>>>>
> >>>>>> Baseline (again)
> >>>>>>
> >>>>>>           Transactions   Latency P50 (s)   Latency P90 (s)   Latency P99 (s)   Throughput (Mbit/s)
> >>>>>> Run 1     3169917        0.00007295        0.00007871        0.00009343        21749.43
> >>>>>> Run 2     3228290        0.00007103        0.00007679        0.00009215        21897.17
> >>>>>> Run 3     3226746        0.00007231        0.00007871        0.00009087        21906.82
> >>>>>> Run 4     3191258        0.00007231        0.00007743        0.00009087        21155.15
> >>>>>> Run 5     3235653        0.00007231        0.00007743        0.00008703        21397.06
> >>>>>> Average   3210372.8      0.000072182       0.000077814       0.00009087        21621.126
> >>>>>>
> >>>>>> cpumap v2 Olek
> >>>>>>
> >>>>>>           Transactions   Latency P50 (s)   Latency P90 (s)   Latency P99 (s)   Throughput (Mbit/s)
> >>>>>> Run 1     3253651        0.00007167        0.00007807        0.00009343        13497.57
> >>>>>> Run 2     3221492        0.00007231        0.00007743        0.00009087        12115.53
> >>>>>> Run 3     3296453        0.00007039        0.00007807        0.00009087        12323.38
> >>>>>> Run 4     3254460        0.00007167        0.00007807        0.00009087        12901.88
> >>>>>> Run 5     3173327        0.00007295        0.00007871        0.00009215        12593.22
> >>>>>> Average   3239876.6      0.000071798       0.00007807        0.000091638       12686.316
> >>>>>> Delta     0.92%          -0.53%            0.33%             0.85%             -41.32%
> >>>>>>
> >>>>>>
> >>>>>> It's very interesting that we see -40% tput w/ the patches. I went
> >>>>>> back
> >>>>>
> >>>>> Oh no, I messed up something =\
> >>>>>
> >>>>> Could you please also test not the whole series, but patches 1-3 (up to
> >>>>> "bpf: cpumap: switch to GRO...") and 1-4 (up to "bpf: cpumap: reuse skb
> >>>>> array...")? It would be great to see whether this implementation performs
> >>>>> worse right from the start or whether I just broke something later on.
> >>>>
> >>>> Patches 1-3 reproduce the -40% tput numbers.
> >>>
> >>> Ok, thanks! Seems like using the hybrid approach (GRO, but on top of
> >>> cpumap's kthreads instead of NAPI) really performs worse than switching
> >>> cpumap to NAPI.
> >>>
> >>>>
> >>>> With patches 1-4 the numbers got slightly worse (~1 Gbps lower), but
> >>>> it was noisy.
> >>>
> >>> Interesting, I was sure patch 4 optimizes stuff... Maybe I'll give up
> >>> on it.
> >>>
> >>>>
> >>>> tcp_rr results were unaffected.
> >>>
> >>> @ Jakub,
> >>>
> >>> Looks like I can't just use GRO without Lorenzo's conversion to NAPI, at
> >>> least for now =\ I took a look at the backlog NAPI and it could be used,
> >>> although we'd need a pointer in the backlog to the corresponding cpumap,
> >>> plus some synchronization point to make sure the backlog NAPI won't
> >>> access an already destroyed cpumap.
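> >>>
> >>> Very roughly, the idea would be something like this (untested; the
> >>> sd->cpumap field is purely hypothetical, only the RCU pointer +
> >>> synchronize_rcu() pattern is the point):
> >>>
> >>>	/* hypothetical per-CPU backlog field, so its poll loop can find
> >>>	 * the cpumap entry it is serving:
> >>>	 *
> >>>	 *	struct bpf_cpu_map_entry __rcu *cpumap;
> >>>	 *
> >>>	 * and on cpumap entry teardown, roughly:
> >>>	 */
> >>>	rcu_assign_pointer(sd->cpumap, NULL);	/* unpublish the entry */
> >>>	synchronize_rcu();			/* wait out in-flight backlog polls */
> >>>	/* only now is it safe to free the cpumap entry */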
> >>>
> >>> Maybe Lorenzo could take a look...
> >>
> >> It seems to me the only difference would be that we'd use the shared
> >> backlog_napi kthreads instead of a dedicated kthread for each cpumap
> >> entry, but we'd still need the NAPI poll logic. I can look into it if
> >> you prefer the shared kthread approach.
> > 
> > I don't like a shared kthread approach. For my use-case I want to give
> > the "remote" CPU-map kthreads higher scheduling priority, as they will be
> > running a 2nd XDP BPF DDoS program that protects against overload by
> > dropping packets.
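> >
> > To illustrate (the priority value is arbitrary, and looking up the pid
> > of the kthread -- they show up as "cpumap/<cpu>/map:<id>" in ps -- is
> > left to the caller), bumping such a kthread from userspace is just a
> > sched_setscheduler() call:
> >
> >	/* cpumap_prio.c - give a cpumap kthread SCHED_FIFO priority.
> >	 * Needs CAP_SYS_NICE (typically run as root).
> >	 */
> >	#define _GNU_SOURCE
> >	#include <sched.h>
> >	#include <stdio.h>
> >	#include <stdlib.h>
> >
> >	int main(int argc, char **argv)
> >	{
> >		struct sched_param sp = { .sched_priority = 50 }; /* arbitrary */
> >
> >		if (argc != 2) {
> >			fprintf(stderr, "usage: %s <cpumap-kthread-pid>\n", argv[0]);
> >			return 1;
> >		}
> >		if (sched_setscheduler(atoi(argv[1]), SCHED_FIFO, &sp)) {
> >			perror("sched_setscheduler");
> >			return 1;
> >		}
> >		return 0;
> >	}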
> 
> Oh, that is also valid.
> Let's see what Jakub replies. For now I'm leaning towards posting the
> approach from this RFC with my bulk allocation from the NAPI cache.

I guess it would be better to keep them separate so we can check the effect
of each change (GRO for cpumap and bulk allocation) on its own. You could
post your changes on top of mine if we all agree the proposed approach is
fine. What do you think?

Regards,
Lorenzo

> 
> > 
> > Thus, I'm not a fan of using the shared backlog_napi, as I don't want
> > to give the backlog NAPI high priority in my use-case.
> > 
> >> @Jakub: what do you think?
> > 
> > 
> > --Jesper
> 
> Thanks,
> Olek


