The patch titled
     Subject: mm-list_lru-fix-uaf-for-memory-cgroup-v2
has been added to the -mm mm-hotfixes-unstable branch.  Its filename is
     mm-list_lru-fix-uaf-for-memory-cgroup-v2.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-list_lru-fix-uaf-for-memory-cgroup-v2.patch

This patch will later appear in the mm-hotfixes-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Subject: mm-list_lru-fix-uaf-for-memory-cgroup-v2
Date: Thu, 1 Aug 2024 10:46:03 +0800

Only grab the RCU lock when necessary, per Vlastimil.

Link: https://lkml.kernel.org/r/20240801024603.1865-1-songmuchun@xxxxxxxxxxxxx
Fixes: 0a97c01cd20b ("list_lru: allow explicit memcg and NUMA node selection")
Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Acked-by: Shakeel Butt <shakeel.butt@xxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Nhat Pham <nphamcs@xxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/list_lru.c |   24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

--- a/mm/list_lru.c~mm-list_lru-fix-uaf-for-memory-cgroup-v2
+++ a/mm/list_lru.c
@@ -112,12 +112,14 @@ bool list_lru_add_obj(struct list_lru *l
 {
 	bool ret;
 	int nid = page_to_nid(virt_to_page(item));
-	struct mem_cgroup *memcg;
 
-	rcu_read_lock();
-	memcg = list_lru_memcg_aware(lru) ? mem_cgroup_from_slab_obj(item) : NULL;
-	ret = list_lru_add(lru, item, nid, memcg);
-	rcu_read_unlock();
+	if (list_lru_memcg_aware(lru)) {
+		rcu_read_lock();
+		ret = list_lru_add(lru, item, nid, mem_cgroup_from_slab_obj(item));
+		rcu_read_unlock();
+	} else {
+		ret = list_lru_add(lru, item, nid, NULL);
+	}
 
 	return ret;
 }
@@ -148,12 +150,14 @@ bool list_lru_del_obj(struct list_lru *l
 {
 	bool ret;
 	int nid = page_to_nid(virt_to_page(item));
-	struct mem_cgroup *memcg;
 
-	rcu_read_lock();
-	memcg = list_lru_memcg_aware(lru) ? mem_cgroup_from_slab_obj(item) : NULL;
-	ret = list_lru_del(lru, item, nid, memcg);
-	rcu_read_unlock();
+	if (list_lru_memcg_aware(lru)) {
+		rcu_read_lock();
+		ret = list_lru_del(lru, item, nid, mem_cgroup_from_slab_obj(item));
+		rcu_read_unlock();
+	} else {
+		ret = list_lru_del(lru, item, nid, NULL);
+	}
 
 	return ret;
 }
_

Patches currently in -mm which might be from songmuchun@xxxxxxxxxxxxx are

mm-list_lru-fix-uaf-for-memory-cgroup.patch
mm-list_lru-fix-uaf-for-memory-cgroup-v2.patch
mm-kmem-remove-mem_cgroup_from_obj.patch
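
For readers following along, here is a minimal, illustrative sketch (not part of
the patch) of the kind of caller these two helpers serve.  Every demo_* name
below is hypothetical; only list_lru_init_memcg(), list_lru_add_obj() and
list_lru_del_obj() are real kernel APIs.  The reason the RCU section exists at
all is that mem_cgroup_from_slab_obj() must be called under rcu_read_lock();
after this fix the helpers only enter that section when the lru is actually
memcg aware, and callers like the one sketched here need no changes.

#include <linux/list_lru.h>
#include <linux/slab.h>

/* Hypothetical slab-backed object tracked on a list_lru. */
struct demo_obj {
	struct list_head lru;	/* linkage consumed by the list_lru */
	/* ... payload ... */
};

static struct list_lru demo_lru;
static struct kmem_cache *demo_cache;

static int demo_setup(void)
{
	/* SLAB_ACCOUNT charges objects to a memcg, so a memcg-aware lru
	 * makes sense and list_lru_add_obj() will take the RCU path. */
	demo_cache = kmem_cache_create("demo_obj", sizeof(struct demo_obj),
				       0, SLAB_ACCOUNT, NULL);
	if (!demo_cache)
		return -ENOMEM;
	return list_lru_init_memcg(&demo_lru, NULL);
}

static struct demo_obj *demo_alloc(void)
{
	struct demo_obj *obj = kmem_cache_alloc(demo_cache, GFP_KERNEL);

	if (obj)
		INIT_LIST_HEAD(&obj->lru);	/* list_lru_add() checks list_empty() */
	return obj;
}

static void demo_cache_object(struct demo_obj *obj)
{
	/* The helper derives both nid and memcg from the object itself. */
	list_lru_add_obj(&demo_lru, &obj->lru);
}

static void demo_uncache_object(struct demo_obj *obj)
{
	list_lru_del_obj(&demo_lru, &obj->lru);
}

A NULL shrinker is passed to list_lru_init_memcg() only to keep the sketch
short; a real cache would normally register a shrinker that walks the lru.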