The quilt patch titled
     Subject: mm: memcg: fix struct memcg_vmstats_percpu size and alignment
has been removed from the -mm tree.  Its filename was
     mm-memcg-optimize-parent-iteration-in-memcg_rstat_updated-fix.patch

This patch was dropped because it was folded into mm-memcg-optimize-parent-iteration-in-memcg_rstat_updated.patch

------------------------------------------------------
From: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
Subject: mm: memcg: fix struct memcg_vmstats_percpu size and alignment
Date: Sat, 3 Feb 2024 04:46:12 +0000

Commit da10d7e140196 ("mm: memcg: optimize parent iteration in
memcg_rstat_updated()") added two additional pointers to the end of
struct memcg_vmstats_percpu, with CACHELINE_PADDING to put them in a
separate cacheline.  This caused the struct size to increase from 1200
to 1280 on my config (80 extra bytes instead of 16).

Upon revisiting, the relevant struct members do not need to be on a
separate cacheline; they just need to fit in a single one.  This is a
percpu struct, so there shouldn't be any contention on that cacheline
anyway.

Move the members to the beginning of the struct and make sure the
struct itself is cacheline-aligned.  Add a comment about the members
that need to fit together in a cacheline.  The struct size is now 1216
on my config with this change.
Link: https://lkml.kernel.org/r/20240203044612.1234216-1-yosryahmed@xxxxxxxxxx
Fixes: da10d7e14019 ("mm: memcg: optimize parent iteration in memcg_rstat_updated()")
Signed-off-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
Reported-by: Greg Thelen <gthelen@xxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Cc: Roman Gushchin <roman.gushchin@xxxxxxxxx>
Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memcontrol.c |   21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

--- a/mm/memcontrol.c~mm-memcg-optimize-parent-iteration-in-memcg_rstat_updated-fix
+++ a/mm/memcontrol.c
@@ -621,6 +621,15 @@ static inline int memcg_events_index(enu
 }
 
 struct memcg_vmstats_percpu {
+	/* Stats updates since the last flush */
+	unsigned int			stats_updates;
+
+	/* Cached pointers for fast iteration in memcg_rstat_updated() */
+	struct memcg_vmstats_percpu	*parent;
+	struct memcg_vmstats		*vmstats;
+
+	/* The above should fit a single cacheline for memcg_rstat_updated() */
+
 	/* Local (CPU and cgroup) page state & events */
 	long			state[MEMCG_NR_STAT];
 	unsigned long		events[NR_MEMCG_EVENTS];
@@ -632,17 +641,7 @@ struct memcg_vmstats_percpu {
 	/* Cgroup1: threshold notifications & softlimit tree updates */
 	unsigned long		nr_page_events;
 	unsigned long		targets[MEM_CGROUP_NTARGETS];
-
-	/* Fit members below in a single cacheline for memcg_rstat_updated() */
-	CACHELINE_PADDING(_pad1_);
-
-	/* Stats updates since the last flush */
-	unsigned int		stats_updates;
-
-	/* Cached pointers for fast iteration in memcg_rstat_updated() */
-	struct memcg_vmstats_percpu	*parent;
-	struct memcg_vmstats	*vmstats;
-};
+} ____cacheline_aligned;
 
 struct memcg_vmstats {
 	/* Aggregated (CPU and subtree) page state & events */
_

Patches currently in -mm which might be from yosryahmed@xxxxxxxxxx are

mm-memcg-optimize-parent-iteration-in-memcg_rstat_updated.patch
mm-zswap-fix-missing-folio-cleanup-in-writeback-race-path.patch
mm-swap-enforce-updating-inuse_pages-at-the-end-of-swap_range_free.patch
mm-zswap-remove-unnecessary-trees-cleanups-in-zswap_swapoff.patch
mm-zswap-remove-unused-tree-argument-in-zswap_entry_put.patch
x86-mm-delete-unused-cpu-argument-to-leave_mm.patch
x86-mm-clarify-prev-usage-in-switch_mm_irqs_off.patch