On Tue, Jul 16, 2024 at 03:53:25PM +0800, Oliver Sang wrote:
> hi, Roman,
>
> On Mon, Jul 15, 2024 at 10:18:39PM +0000, Roman Gushchin wrote:
> > On Mon, Jul 15, 2024 at 10:14:31PM +0800, Oliver Sang wrote:
> > > hi, Roman Gushchin,
> > >
> > > On Fri, Jul 12, 2024 at 07:03:31PM +0000, Roman Gushchin wrote:
> > > > On Fri, Jul 12, 2024 at 02:04:48PM +0800, kernel test robot wrote:
> > > > >
> > > > > Hello,
> > > > >
> > > > > kernel test robot noticed a -29.4% regression of aim7.jobs-per-min on:
> > > > >
> > > > > commit: 98c9daf5ae6be008f78c07b744bcff7bcc6e98da ("mm: memcg: guard memcg1-specific members of struct mem_cgroup_per_node")
> > > > > https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
> > > >
> > > > Hello,
> > > >
> > > > thank you for the report!
> > > >
> > > > I'd expect that the regression should be fixed by the commit
> > > > "mm: memcg: add cache line padding to mem_cgroup_per_node".
> > > >
> > > > Can you, please, confirm that it's not the case?
> > > >
> > > > Thank you!
> > >
> > > in our aim7 test, we found the performance partially recovered by
> > > "mm: memcg: add cache line padding to mem_cgroup_per_node", but not fully.
> >
> > Thank you for providing the detailed information!
> >
> > Can you, please, check if the following patch resolves the regression entirely?
>
> no. in our tests, the following patch has little impact.
> I applied it directly on top of 6df13230b6 (if this is not the proper way
> to apply it, please let me know, thanks)

Hm, interesting. And thank you for the confirmation, you did everything
correctly.

Because the only thing the original patch did was remove a few fields from
the mem_cgroup_per_node struct, there are not many options left here.

Would you mind trying the following patch?

Thank you, I really appreciate your help!

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 7e2eb091049a..0e5bf25d324f 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -109,6 +109,7 @@ struct mem_cgroup_per_node {
 
 	/* Fields which get updated often at the end. */
 	struct lruvec		lruvec;
+	CACHELINE_PADDING(_pad2_);
 	unsigned long		lru_zone_size[MAX_NR_ZONES][NR_LRU_LISTS];
 	struct mem_cgroup_reclaim_iter	iter;
 };
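
For context on why the padding might help: CACHELINE_PADDING() inserts an
empty, cacheline-aligned member (on !SMP builds the macro expands to
nothing, if I recall include/linux/cache.h correctly), so the fields that
follow it start on a fresh cache line and no longer false-share with the
frequently-written lruvec. Here is a minimal standalone sketch of the same
idea in plain C11; this is not kernel code, the names node_stats/hot/cold
are made up for illustration, and the 64-byte line size is an assumption:

#include <stdalign.h>
#include <stddef.h>
#include <stdio.h>

/* Assumption: 64-byte cache lines, typical for x86_64. */
#define CACHE_LINE_SIZE 64

/*
 * Hypothetical per-node stats. Without the alignment below, "hot" and
 * "cold" would share a cache line, so a CPU hammering "hot" would keep
 * invalidating "cold" in every other CPU's cache (false sharing).
 */
struct node_stats {
	unsigned long hot;				/* updated constantly */
	alignas(CACHE_LINE_SIZE) unsigned long cold;	/* starts a new line */
};

int main(void)
{
	printf("hot  at offset %zu\n", offsetof(struct node_stats, hot));
	printf("cold at offset %zu\n", offsetof(struct node_stats, cold));
	/* Expect 0 and 64: the two fields no longer share a line. */
	return 0;
}

The kernel helper gets the same effect with a zero-size aligned member
instead of aligning a real field, which keeps the padding independent of
whichever field happens to come next in the struct.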