On Wed, May 22, 2024 at 08:48:24PM -0700, Shakeel Butt wrote:
> The kernel test robot reported [1] a performance regression in the
> will-it-scale test suite's page_fault2 test case for commit
> 70a64b7919cb ("memcg: dynamically allocate lruvec_stats"). After
> inspection it seems the commit unintentionally introduced false
> sharing.
>
> After the commit, the fields of mem_cgroup_per_node that are read on
> the performance-critical path share a cacheline with the fields that
> are updated often on LRU page allocations and deallocations. This
> causes contention on that cacheline, and workloads that manipulate a
> lot of LRU pages regress, as the test report shows.
>
> The fix is to rearrange the fields of mem_cgroup_per_node so that the
> false sharing is eliminated: move all the read-only pointers to the
> start of the struct, followed by the memcg-v1-only fields, with the
> frequently updated fields at the end.
>
> Experiment setup: ran fallocate1, fallocate2, page_fault1, page_fault2
> and page_fault3 from the will-it-scale test suite inside a three-level
> memcg hierarchy, with /tmp mounted as tmpfs, on two machines: one with
> a single NUMA node and one with two NUMA nodes.
>
> $ ./[testcase]_processes -t $NR_CPUS -s 50
>
> Results for single node, 52 CPU machine:
>
> Testcase           base       with-patch
>
> fallocate1      1031081     1431291  (38.80 %)
> fallocate2      1029993     1421421  (38.00 %)
> page_fault1     2269440     3405788  (50.07 %)
> page_fault2     2375799     3572868  (50.30 %)
> page_fault3    28641143    28673950  ( 0.11 %)
>
> Results for dual node, 80 CPU machine:
>
> Testcase           base       with-patch
>
> fallocate1      2976288     3641185  (22.33 %)
> fallocate2      2979366     3638181  (22.11 %)
> page_fault1     6221790     7748245  (24.53 %)
> page_fault2     6482854     7847698  (21.05 %)
> page_fault3    28804324    28991870  ( 0.65 %)
>
> Fixes: 70a64b7919cb ("memcg: dynamically allocate lruvec_stats")
> Reported-by: kernel test robot <oliver.sang@xxxxxxxxx>
> Closes: https://lore.kernel.org/oe-lkp/202405171353.b56b845-oliver.sang@xxxxxxxxx
> Signed-off-by: Shakeel Butt <shakeel.butt@xxxxxxxxx>

Reviewed-by: Roman Gushchin <roman.gushchin@xxxxxxxxx>

Thanks!
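
For anyone reading along who wants the layout idea in isolation, below is
a minimal C11 sketch of the field-grouping technique the patch describes.
All names here (example_per_node, stats_percpu, stats, soft_limit_excess,
lru_size, CACHELINE_SIZE) are hypothetical illustrations, not the actual
mem_cgroup_per_node layout; see the patch itself for that.

/*
 * Illustrative only: read-mostly fields come first, legacy-only fields
 * next, and the frequently written counters are pushed onto their own
 * cacheline so that updating them does not invalidate the cacheline
 * holding the read-mostly pointers.
 */
#include <stdalign.h>

#define CACHELINE_SIZE 64	/* assumption: 64-byte cachelines */

struct example_per_node {
	/* Read-mostly pointers consulted on the fault fast path. */
	void *stats_percpu;	/* set once at init, then only read */
	void *stats;		/* set once at init, then only read */

	/* Fields used only by the legacy (v1) interface. */
	unsigned long soft_limit_excess;

	/*
	 * Write-hot counters start on a fresh cacheline; frequent
	 * updates dirty this line, not the one holding the pointers.
	 */
	alignas(CACHELINE_SIZE) unsigned long lru_size[4];
};

In kernel code the same effect is usually achieved with
____cacheline_aligned annotations rather than C11 alignas, but the
layout principle is the same.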