RE: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a proper size

> -----Original Message-----
> From: Shakeel Butt <shakeelb@xxxxxxxxxx>
> Sent: Monday, May 15, 2023 12:13 PM
> To: Zhang, Cathy <cathy.zhang@xxxxxxxxx>
> Cc: Eric Dumazet <edumazet@xxxxxxxxxx>; Linux MM <linux-
> mm@xxxxxxxxx>; Cgroups <cgroups@xxxxxxxxxxxxxxx>; Paolo Abeni
> <pabeni@xxxxxxxxxx>; davem@xxxxxxxxxxxxx; kuba@xxxxxxxxxx;
> Brandeburg, Jesse <jesse.brandeburg@xxxxxxxxx>; Srinivas, Suresh
> <suresh.srinivas@xxxxxxxxx>; Chen, Tim C <tim.c.chen@xxxxxxxxx>; You,
> Lizhen <lizhen.you@xxxxxxxxx>; eric.dumazet@xxxxxxxxx;
> netdev@xxxxxxxxxxxxxxx
> Subject: Re: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a proper
> size
> 
> On Sun, May 14, 2023 at 8:46 PM Zhang, Cathy <cathy.zhang@xxxxxxxxx>
> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Shakeel Butt <shakeelb@xxxxxxxxxx>
> > > Sent: Saturday, May 13, 2023 1:17 AM
> > > To: Zhang, Cathy <cathy.zhang@xxxxxxxxx>
> > > Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>; Eric Dumazet
> > > <edumazet@xxxxxxxxxx>; Linux MM <linux-mm@xxxxxxxxx>; Cgroups
> > > <cgroups@xxxxxxxxxxxxxxx>; Paolo Abeni <pabeni@xxxxxxxxxx>;
> > > davem@xxxxxxxxxxxxx; kuba@xxxxxxxxxx; Brandeburg@xxxxxxxxxx;
> > > Brandeburg, Jesse <jesse.brandeburg@xxxxxxxxx>; Srinivas, Suresh
> > > <suresh.srinivas@xxxxxxxxx>; Chen, Tim C <tim.c.chen@xxxxxxxxx>;
> > > You, Lizhen <lizhen.you@xxxxxxxxx>; eric.dumazet@xxxxxxxxx;
> > > netdev@xxxxxxxxxxxxxxx
> > > Subject: Re: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as
> > > a proper size
> > >
> > > On Fri, May 12, 2023 at 05:51:40AM +0000, Zhang, Cathy wrote:
> > > >
> > > >
> > > [...]
> > > > >
> > > > > Thanks a lot. This tells us that one or both of the following
> > > > > scenarios are happening:
> > > > >
> > > > > 1. In the softirq recv path, the kernel is processing packets
> > > > > from multiple memcgs.
> > > > >
> > > > > 2. The process running on the CPU belongs to a memcg which is
> > > > > different from the memcgs whose packets are being received on
> > > > > that CPU.
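
Just to illustrate why either scenario is expensive, here is a minimal
userspace sketch (purely illustrative; the per-CPU "stock" model, the
names and the batch size are simplified assumptions, not the actual
memcontrol code). When the per-CPU charge cache can only serve one memcg
at a time, every memcg switch has to return the unused stock and refill
from the shared counter, which is roughly where the
page_counter_try_charge/page_counter_cancel cycles in the profiles
further down come from:

#include <stdio.h>

#define BATCH 64	/* pages cached per CPU before the shared counter is touched */

struct memcg {
	long pages_charged;	/* stand-in for the shared page_counter */
};

struct cpu_stock {
	struct memcg *cached;	/* the one memcg this CPU's stock can serve */
	int nr_pages;
};

static long counter_ops;	/* operations hitting the shared counter */

static void charge_one_page(struct cpu_stock *stock, struct memcg *memcg)
{
	if (stock->cached != memcg || stock->nr_pages == 0) {
		if (stock->cached && stock->nr_pages) {
			/* return the unused stock, roughly page_counter_cancel */
			stock->cached->pages_charged -= stock->nr_pages;
			counter_ops++;
		}
		/* refill for the new memcg, roughly page_counter_try_charge */
		stock->cached = memcg;
		stock->nr_pages = BATCH;
		memcg->pages_charged += BATCH;
		counter_ops++;
	}
	stock->nr_pages--;	/* fast path: per-CPU only, no shared counter */
}

int main(void)
{
	struct memcg a = { 0 }, b = { 0 };
	struct cpu_stock stock = { 0 };
	int i;

	for (i = 0; i < 10000; i++)	/* all packets charged to one memcg */
		charge_one_page(&stock, &a);
	printf("one memcg : %ld shared-counter ops\n", counter_ops);

	counter_ops = 0;
	for (i = 0; i < 10000; i++)	/* packets alternate between two memcgs */
		charge_one_page(&stock, (i & 1) ? &b : &a);
	printf("two memcgs: %ld shared-counter ops\n", counter_ops);
	return 0;
}

With 10,000 simulated charges, the single-memcg run touches the shared
counter only a few hundred times, while the alternating run touches it on
nearly every packet.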
> > > >
> > > > Thanks for sharing the points, Shakeel! Are there any trace records
> > > > you want to collect?
> > > >
> > >
> > > Can you please try the following patch and see if there is any
> > > improvement?
> >
> > Hi Shakeel,
> >
> > After trying the following patch, the system-wide 'perf top' data
> > indicates that the overhead of page_counter_cancel drops from 15.52%
> > to 4.82%.
> >
> > Without patch:
> >     15.52%  [kernel]            [k] page_counter_cancel
> >     12.30%  [kernel]            [k] page_counter_try_charge
> >     11.97%  [kernel]            [k] try_charge_memcg
> >
> > With patch:
> >     10.63%  [kernel]            [k] page_counter_try_charge
> >      9.49%  [kernel]            [k] try_charge_memcg
> >      4.82%  [kernel]            [k] page_counter_cancel
> >
> > The patch was applied on top of the latest net-next/main:
> > befcc1fce564 ("sfc: fix use-after-free in
> > efx_tc_flower_record_encap_match()")
> >
> 
> Thanks a lot, Cathy, for testing. Do you see any performance improvement
> for the memcached benchmark with the patch?

Yep, absolutely :-) RPS (with patch vs. without) = +1.74



