The patch titled
     Subject: mm/list_lru: don't pass unnecessary key parameters
has been added to the -mm mm-unstable branch.  Its filename is
     mm-list_lru-dont-pass-unnecessary-key-parameters.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-list_lru-dont-pass-unnecessary-key-parameters.patch

This patch will later appear in the mm-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Kairui Song <kasong@xxxxxxxxxxx>
Subject: mm/list_lru: don't pass unnecessary key parameters
Date: Thu, 26 Sep 2024 01:10:15 +0800

Patch series "Split list_lru lock into per-cgroup scope", v2.

Currently, every list_lru has a per-node lock that protects adding,
deletion, isolation, and reparenting of all list_lru_one instances
belonging to this list_lru on this node.  This lock contention is heavy
when multiple cgroups modify the same list_lru.

This can be alleviated by splitting the lock into per-cgroup scope.

To achieve this, this series reworked and optimized the reparenting
process step by step, making it possible to have a stable list_lru_one,
and making it possible to pin the list_lru_one.  Then split the lock
into per-cgroup scope.

The result is ~15% performance gain for a simple multi-cgroup tar test
of small files, and reduced LOC.  See PATCH 5/6 for test details.
This patch (of 6):

When LOCKDEP is not enabled, lock_class_key is an empty struct that is
never used.  But the list_lru initialization function still takes a
placeholder pointer as parameter, and the compiler cannot optimize it
because the function is not static and exported.

Remove this parameter and move it inside the list_lru struct.  Only use
it when LOCKDEP is enabled.  Kernel builds with LOCKDEP will be slightly
larger, while !LOCKDEP builds without it will be slightly smaller (the
common case).

Link: https://lkml.kernel.org/r/20240925171020.32142-1-ryncsn@xxxxxxxxx
Link: https://lkml.kernel.org/r/20240925171020.32142-2-ryncsn@xxxxxxxxx
Signed-off-by: Kairui Song <kasong@xxxxxxxxxxx>
Cc: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Cc: Qi Zheng <zhengqi.arch@xxxxxxxxxxxxx>
Cc: Roman Gushchin <roman.gushchin@xxxxxxxxx>
Cc: Shakeel Butt <shakeel.butt@xxxxxxxxx>
Cc: Waiman Long <longman@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/list_lru.h |   18 +++++++++++++++---
 mm/list_lru.c            |    9 +++++----
 mm/workingset.c          |    4 ++--
 3 files changed, 22 insertions(+), 9 deletions(-)

--- a/include/linux/list_lru.h~mm-list_lru-dont-pass-unnecessary-key-parameters
+++ a/include/linux/list_lru.h
@@ -56,16 +56,28 @@ struct list_lru {
 	bool memcg_aware;
 	struct xarray xa;
 #endif
+#ifdef CONFIG_LOCKDEP
+	struct lock_class_key *key;
+#endif
 };
 
 void list_lru_destroy(struct list_lru *lru);
 int __list_lru_init(struct list_lru *lru, bool memcg_aware,
-		    struct lock_class_key *key, struct shrinker *shrinker);
+		    struct shrinker *shrinker);
 
 #define list_lru_init(lru)				\
 	__list_lru_init((lru), false, NULL)
 #define list_lru_init_memcg(lru, shrinker)		\
 	__list_lru_init((lru), true, shrinker)
+
+static inline int list_lru_init_memcg_key(struct list_lru *lru, struct shrinker *shrinker,
+					  struct lock_class_key *key)
+{
+#ifdef CONFIG_LOCKDEP
+	lru->key = key;
+#endif
+	return list_lru_init_memcg(lru, shrinker);
+}
 
 int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
 			 gfp_t gfp);
--- a/mm/list_lru.c~mm-list_lru-dont-pass-unnecessary-key-parameters
+++ a/mm/list_lru.c
@@ -562,8 +562,7 @@ static void memcg_destroy_list_lru(struc
 }
 #endif /* CONFIG_MEMCG */
 
-int __list_lru_init(struct list_lru *lru, bool memcg_aware,
-		    struct lock_class_key *key, struct shrinker *shrinker)
+int __list_lru_init(struct list_lru *lru, bool memcg_aware, struct shrinker *shrinker)
 {
 	int i;
 
@@ -583,8 +582,10 @@ int __list_lru_init(struct list_lru *lru
 
 	for_each_node(i) {
 		spin_lock_init(&lru->node[i].lock);
-		if (key)
-			lockdep_set_class(&lru->node[i].lock, key);
+#ifdef CONFIG_LOCKDEP
+		if (lru->key)
+			lockdep_set_class(&lru->node[i].lock, lru->key);
+#endif
 		init_one_lru(&lru->node[i].lru);
 	}
 
--- a/mm/workingset.c~mm-list_lru-dont-pass-unnecessary-key-parameters
+++ a/mm/workingset.c
@@ -823,8 +823,8 @@ static int __init workingset_init(void)
 	if (!workingset_shadow_shrinker)
 		goto err;
 
-	ret = __list_lru_init(&shadow_nodes, true, &shadow_nodes_key,
-			      workingset_shadow_shrinker);
+	ret = list_lru_init_memcg_key(&shadow_nodes, workingset_shadow_shrinker,
+				      &shadow_nodes_key);
 	if (ret)
 		goto err_list_lru;
_

Patches currently in -mm which might be from kasong@xxxxxxxxxxx are

mm-list_lru-dont-pass-unnecessary-key-parameters.patch
mm-list_lru-dont-export-list_lru_add.patch
mm-list_lru-code-clean-up-for-reparenting.patch
mm-list_lru-simplify-reparenting-and-initial-allocation.patch
mm-list_lru-split-the-lock-to-per-cgroup-scope.patch
mm-list_lru-simplify-the-list_lru-walk-callback-function.patch