+Feng, Yin and Oliver

On Sun, May 14, 2023 at 11:27 PM Zhang, Cathy <cathy.zhang@xxxxxxxxx> wrote:
>
> > -----Original Message-----
> > From: Shakeel Butt <shakeelb@xxxxxxxxxx>
> > Sent: Monday, May 15, 2023 12:13 PM
> > To: Zhang, Cathy <cathy.zhang@xxxxxxxxx>
> > Cc: Eric Dumazet <edumazet@xxxxxxxxxx>; Linux MM <linux-mm@xxxxxxxxx>;
> > Cgroups <cgroups@xxxxxxxxxxxxxxx>; Paolo Abeni <pabeni@xxxxxxxxxx>;
> > davem@xxxxxxxxxxxxx; kuba@xxxxxxxxxx; Brandeburg, Jesse
> > <jesse.brandeburg@xxxxxxxxx>; Srinivas, Suresh <suresh.srinivas@xxxxxxxxx>;
> > Chen, Tim C <tim.c.chen@xxxxxxxxx>; You, Lizhen <lizhen.you@xxxxxxxxx>;
> > eric.dumazet@xxxxxxxxx; netdev@xxxxxxxxxxxxxxx
> > Subject: Re: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a proper size
> >
> > On Sun, May 14, 2023 at 8:46 PM Zhang, Cathy <cathy.zhang@xxxxxxxxx> wrote:
> > >
> > > > -----Original Message-----
> > > > From: Shakeel Butt <shakeelb@xxxxxxxxxx>
> > > > Sent: Saturday, May 13, 2023 1:17 AM
> > > > To: Zhang, Cathy <cathy.zhang@xxxxxxxxx>
> > > > Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>; Eric Dumazet <edumazet@xxxxxxxxxx>;
> > > > Linux MM <linux-mm@xxxxxxxxx>; Cgroups <cgroups@xxxxxxxxxxxxxxx>;
> > > > Paolo Abeni <pabeni@xxxxxxxxxx>; davem@xxxxxxxxxxxxx; kuba@xxxxxxxxxx;
> > > > Brandeburg@xxxxxxxxxx; Brandeburg, Jesse <jesse.brandeburg@xxxxxxxxx>;
> > > > Srinivas, Suresh <suresh.srinivas@xxxxxxxxx>; Chen, Tim C <tim.c.chen@xxxxxxxxx>;
> > > > You, Lizhen <lizhen.you@xxxxxxxxx>; eric.dumazet@xxxxxxxxx;
> > > > netdev@xxxxxxxxxxxxxxx
> > > > Subject: Re: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a proper size
> > > >
> > > > On Fri, May 12, 2023 at 05:51:40AM +0000, Zhang, Cathy wrote:
> > > > >
> > > > > [...]
> > > > > >
> > > > > > Thanks a lot. This tells us that one or both of the following
> > > > > > scenarios are happening:
> > > > > >
> > > > > > 1. In the softirq recv path, the kernel is processing packets
> > > > > > from multiple memcgs.
> > > > > >
> > > > > > 2. The process running on the CPU belongs to a memcg that is
> > > > > > different from the memcgs whose packets are being received on that CPU.
> > > > >
> > > > > Thanks for sharing the points, Shakeel! Are there any trace records
> > > > > you want to collect?
> > > > >
> > > >
> > > > Can you please try the following patch and see if there is any improvement?
> > >
> > > Hi Shakeel,
> > >
> > > With the following patch applied, system-wide 'perf top' data indicates
> > > that the overhead of page_counter_cancel drops from 15.52% to 4.82%.
> > >
> > > Without patch:
> > >   15.52%  [kernel]  [k] page_counter_cancel
> > >   12.30%  [kernel]  [k] page_counter_try_charge
> > >   11.97%  [kernel]  [k] try_charge_memcg
> > >
> > > With patch:
> > >   10.63%  [kernel]  [k] page_counter_try_charge
> > >    9.49%  [kernel]  [k] try_charge_memcg
> > >    4.82%  [kernel]  [k] page_counter_cancel
> > >
> > > The patch is applied on top of the latest net-next/main:
> > > befcc1fce564 ("sfc: fix use-after-free in efx_tc_flower_record_encap_match()")
> > >
> >
> > Thanks a lot Cathy for testing. Do you see any performance improvement for
> > the memcached benchmark with the patch?
>
> Yep, absolutely :-) RPS (with/without patch) = +1.74

Thanks a lot, Cathy.

Feng/Yin/Oliver, can you please test the patch at [1] with the other
workloads used by the test robot? Basically I want to know whether it has
any positive or negative impact on other perf benchmarks.
[1] https://lore.kernel.org/all/20230512171702.923725-1-shakeelb@xxxxxxxxxx/

Thanks in advance.
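For context on the profile numbers above: page_counter_try_charge and
page_counter_cancel update counters shared across CPUs, and the two
scenarios listed earlier tend to defeat per-CPU caching of those charges,
so the receive path ends up touching the shared counters packet by packet.
The program below is a minimal, standalone userspace sketch -- not kernel
code and not the patch at [1] -- that only models that effect: several
threads charging and uncharging tiny amounts against one shared counter
versus batching the updates locally, roughly analogous to a per-CPU charge
stock. The thread count, batch size, and names are illustrative assumptions.

/*
 * counter_contention.c: models per-op vs. batched updates to a shared
 * counter (a stand-in for a contended memcg page counter).
 *
 * Build:  gcc -O2 -pthread counter_contention.c -o counter_contention
 * Run:    ./counter_contention          (per-op updates, contended)
 *         ./counter_contention batch    (batched updates)
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

#define NTHREADS 8
#define OPS      (4 * 1000 * 1000)
#define BATCH    64                  /* hypothetical batch size */

static atomic_long shared_counter;   /* stand-in for the shared page counter */
static int batched;                  /* 0: charge/uncharge per op, 1: batched */

static void *worker(void *unused)
{
	long local = 0;

	(void)unused;
	for (long i = 0; i < OPS; i++) {
		if (!batched) {
			/* every "packet" hits the shared cacheline twice */
			atomic_fetch_add(&shared_counter, 1);
			atomic_fetch_sub(&shared_counter, 1);
		} else if (++local == BATCH) {
			/* flush a whole batch at once, then reset */
			atomic_fetch_add(&shared_counter, local);
			atomic_fetch_sub(&shared_counter, local);
			local = 0;
		}
	}
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t tid[NTHREADS];
	struct timespec t0, t1;

	batched = (argc > 1 && strcmp(argv[1], "batch") == 0);

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, worker, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("%s: %.3f s (counter back to %ld)\n",
	       batched ? "batched" : "per-op",
	       (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9,
	       atomic_load(&shared_counter));
	return 0;
}

Comparing the two modes on a multi-core machine shows the per-op case
spending most of its time bouncing the counter's cacheline between CPUs,
which is the same kind of cost that shows up under page_counter_try_charge
and page_counter_cancel in the perf profiles quoted above.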