The patch titled
     Subject: mm-memcg-prevent-memoryoom_control-load-store-tearing-v3
has been added to the -mm mm-unstable branch.  Its filename is
     mm-memcg-prevent-memoryoom_control-load-store-tearing-v3.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-memcg-prevent-memoryoom_control-load-store-tearing-v3.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Yue Zhao <findns94@xxxxxxxxx>
Subject: mm-memcg-prevent-memoryoom_control-load-store-tearing-v3
Date: Thu, 9 Mar 2023 00:25:54 +0800

Add [WRITE|READ]_ONCE for all occurrences of memcg->oom_kill_disable,
memcg->swappiness and memcg->soft_limit.

Link: https://lkml.kernel.org/r/20230308162555.14195-4-findns94@xxxxxxxxx
Signed-off-by: Yue Zhao <findns94@xxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Cc: Roman Gushchin <roman.gushchin@xxxxxxxxx>
Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>
Cc: Tang Yizhou <tangyeechou@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

--- a/mm/memcontrol.c~mm-memcg-prevent-memoryoom_control-load-store-tearing-v3
+++ a/mm/memcontrol.c
@@ -1929,7 +1929,7 @@ static bool mem_cgroup_oom(struct mem_cg
 	 * Please note that mem_cgroup_out_of_memory might fail to find a
 	 * victim and then we have to bail out from the charge path.
 	 */
-	if (memcg->oom_kill_disable) {
+	if (READ_ONCE(memcg->oom_kill_disable)) {
 		if (current->in_user_fault) {
 			css_get(&memcg->css);
 			current->memcg_in_oom = memcg;
@@ -1999,7 +1999,7 @@ bool mem_cgroup_oom_synchronize(bool han
 	if (locked)
 		mem_cgroup_oom_notify(memcg);

-	if (locked && !memcg->oom_kill_disable) {
+	if (locked && !READ_ONCE(memcg->oom_kill_disable)) {
 		mem_cgroup_unmark_under_oom(memcg);
 		finish_wait(&memcg_oom_waitq, &owait.wait);
 		mem_cgroup_out_of_memory(memcg, current->memcg_oom_gfp_mask,
@@ -5354,7 +5354,7 @@ mem_cgroup_css_alloc(struct cgroup_subsy
 		page_counter_set_high(&memcg->swap, PAGE_COUNTER_MAX);
 	if (parent) {
 		WRITE_ONCE(memcg->swappiness, mem_cgroup_swappiness(parent));
-		memcg->oom_kill_disable = parent->oom_kill_disable;
+		WRITE_ONCE(memcg->oom_kill_disable, READ_ONCE(parent->oom_kill_disable));

 		page_counter_init(&memcg->memory, &parent->memory);
 		page_counter_init(&memcg->swap, &parent->swap);
_

Patches currently in -mm which might be from findns94@xxxxxxxxx are

mm-memcg-prevent-memoryoomgroup-load-store-tearing.patch
mm-memcg-prevent-memoryswappiness-load-store-tearing.patch
mm-memcg-prevent-memoryswappiness-load-store-tearing-v3.patch
mm-memcg-prevent-memoryoom_control-load-store-tearing.patch
mm-memcg-prevent-memoryoom_control-load-store-tearing-v3.patch
mm-memcg-prevent-memorysoft_limit_in_bytes-load-store-tearing.patch
mm-memcg-prevent-memorysoft_limit_in_bytes-load-store-tearing-v3.patch
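
For readers unfamiliar with the pattern, the sketch below is not part of the
patch above; it is a minimal userspace illustration of what the patch applies
in mm/memcontrol.c.  The struct fake_memcg, toggle_oom_control() and
oom_path() names are made up for this example, and the READ_ONCE()/WRITE_ONCE()
macros here are simplified volatile-cast stand-ins for the kernel's.  The idea
they show is the same: a flag written from one context and read locklessly from
another should go through *_ONCE() accessors so the compiler cannot tear, fuse
or re-load the access.

/*
 * Illustrative sketch only -- not part of the patch above.  Mimics the
 * kernel's READ_ONCE()/WRITE_ONCE() with plain volatile casts to show why
 * a flag shared between a writer context and a lockless reader is accessed
 * through *_ONCE() helpers.
 */
#include <pthread.h>
#include <stdio.h>

#define READ_ONCE(x)       (*(const volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))

struct fake_memcg {
	int oom_kill_disable;	/* in the kernel: toggled via memory.oom_control */
};

static struct fake_memcg memcg;

/* Writer side: loosely analogous to writing memory.oom_control. */
static void *toggle_oom_control(void *arg)
{
	(void)arg;
	WRITE_ONCE(memcg.oom_kill_disable, 1);
	return NULL;
}

/* Reader side: loosely analogous to the check in mem_cgroup_oom(). */
static void *oom_path(void *arg)
{
	(void)arg;
	if (READ_ONCE(memcg.oom_kill_disable))
		puts("OOM killer disabled for this memcg");
	else
		puts("OOM killer enabled for this memcg");
	return NULL;
}

int main(void)
{
	pthread_t writer, reader;

	pthread_create(&writer, NULL, toggle_oom_control, NULL);
	pthread_create(&reader, NULL, oom_path, NULL);
	pthread_join(writer, NULL);
	pthread_join(reader, NULL);
	return 0;
}

Built with something like "cc -pthread sketch.c", either output is valid; the
*_ONCE() accessors only guarantee that each load and store happens exactly
once and is not torn, which is all the patch is after for these fields.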