This patchset reorganizes the page_counter structures, which makes the memory
cgroup and hugetlb cgroup structures smaller (by 10%-35%, depending on the
kernel configuration) and more cache-effective. It also eliminates useless
tracking of protected memory usage when it's not needed.

v2:
  - two page_counter structures per hugetlb cgroup instead of one
  - rebased to the current mm branch
  - many minor fixes and improvements

v1:
  https://lore.kernel.org/lkml/20240503201835.2969707-1-roman.gushchin@xxxxxxxxx/T/#m77151ed83451a49132e29ef13d55e08b95ac867f

Roman Gushchin (5):
  mm: memcg: don't call propagate_protected_usage() needlessly
  mm: page_counters: put page_counter_calculate_protection() under CONFIG_MEMCG
  mm: memcg: merge multiple page_counters into a single structure
  mm: page_counters: initialize usage using ATOMIC_LONG_INIT() macro
  mm: memcg: convert enum res_type to mem_counter_type

 include/linux/hugetlb.h        |   4 +-
 include/linux/hugetlb_cgroup.h |   8 +-
 include/linux/memcontrol.h     |  19 +--
 include/linux/page_counter.h   | 128 ++++++++++++++++----
 mm/hugetlb.c                   |  14 +--
 mm/hugetlb_cgroup.c            | 150 +++++++++--------------
 mm/memcontrol-v1.c             | 154 ++++++++++--------------
 mm/memcontrol-v1.h             |  10 +-
 mm/memcontrol.c                | 211 ++++++++++++++++-----------------
 mm/page_counter.c              |  94 +++++++++------
 10 files changed, 403 insertions(+), 389 deletions(-)

-- 
2.46.0.rc1.232.g9752f9e123-goog