The patch titled
     Subject: mm/vmstat: use xchg in cpu_vm_stats_fold
has been added to the -mm mm-unstable branch.  Its filename is
     mm-vmstat-use-xchg-in-cpu_vm_stats_fold.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-vmstat-use-xchg-in-cpu_vm_stats_fold.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Marcelo Tosatti <mtosatti@xxxxxxxxxx>
Subject: mm/vmstat: use xchg in cpu_vm_stats_fold
Date: Fri, 03 Mar 2023 16:58:50 -0300

In preparation for switching the vmstat shepherd to flush per-CPU counters
remotely, use xchg instead of a pair of read/write instructions.

Link: https://lkml.kernel.org/r/20230303195908.977788434@xxxxxxxxxx
Signed-off-by: Marcelo Tosatti <mtosatti@xxxxxxxxxx>
Cc: Aaron Tomlin <atomlin@xxxxxxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Frederic Weisbecker <frederic@xxxxxxxxxx>
Cc: Heiko Carstens <hca@xxxxxxxxxxxxx>
Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
Cc: Huacai Chen <chenhuacai@xxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxx>
Cc: Peter Xu <peterx@xxxxxxxxxx>
Cc: "Russell King (Oracle)" <linux@xxxxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

--- a/mm/vmstat.c~mm-vmstat-use-xchg-in-cpu_vm_stats_fold
+++ a/mm/vmstat.c
@@ -883,7 +883,7 @@ static int refresh_cpu_vm_stats(void)
 }
 
 /*
- * Fold the data for an offline cpu into the global array.
+ * Fold the data for a cpu into the global array.
  * There cannot be any access by the offline cpu and therefore
  * synchronization is simplified.
  */
@@ -904,8 +904,7 @@ void cpu_vm_stats_fold(int cpu)
 			if (pzstats->vm_stat_diff[i]) {
 				int v;
 
-				v = pzstats->vm_stat_diff[i];
-				pzstats->vm_stat_diff[i] = 0;
+				v = xchg(&pzstats->vm_stat_diff[i], 0);
 				atomic_long_add(v, &zone->vm_stat[i]);
 				global_zone_diff[i] += v;
 			}
@@ -915,8 +914,7 @@ void cpu_vm_stats_fold(int cpu)
 			if (pzstats->vm_numa_event[i]) {
 				unsigned long v;
 
-				v = pzstats->vm_numa_event[i];
-				pzstats->vm_numa_event[i] = 0;
+				v = xchg(&pzstats->vm_numa_event[i], 0);
 				zone_numa_event_add(v, zone, i);
 			}
 		}
@@ -932,8 +930,7 @@ void cpu_vm_stats_fold(int cpu)
 			if (p->vm_node_stat_diff[i]) {
 				int v;
 
-				v = p->vm_node_stat_diff[i];
-				p->vm_node_stat_diff[i] = 0;
+				v = xchg(&p->vm_node_stat_diff[i], 0);
 				atomic_long_add(v, &pgdat->vm_stat[i]);
 				global_node_diff[i] += v;
 			}
_

Patches currently in -mm which might be from mtosatti@xxxxxxxxxx are

mm-vmstat-remove-remote-node-draining.patch
this_cpu_cmpxchg-arm64-switch-this_cpu_cmpxchg-to-locked-add-_local-function.patch
this_cpu_cmpxchg-loongarch-switch-this_cpu_cmpxchg-to-locked-add-_local-function.patch
this_cpu_cmpxchg-s390-switch-this_cpu_cmpxchg-to-locked-add-_local-function.patch
this_cpu_cmpxchg-x86-switch-this_cpu_cmpxchg-to-locked-add-_local-function.patch
add-this_cpu_cmpxchg_local-and-asm-generic-definitions.patch
convert-this_cpu_cmpxchg-users-to-this_cpu_cmpxchg_local.patch
mm-vmstat-switch-counter-modification-to-cmpxchg.patch
mm-vmstat-use-xchg-in-cpu_vm_stats_fold.patch
mm-vmstat-switch-vmstat-shepherd-to-flush-per-cpu-counters-remotely.patch
mm-vmstat-refresh-stats-remotely-instead-of-via-work-item.patch
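
As additional context (not part of the patch itself): the changelog above
replaces a separate read/write pair with xchg because the counters may soon
be folded by a remote CPU. Below is a minimal userspace sketch of that
fetch-and-zero pattern, using C11 atomics rather than the kernel's xchg()
macro; the names diff and global_stat are illustrative stand-ins for a
per-CPU diff slot and its global counter, not kernel symbols.

#include <stdatomic.h>
#include <stdio.h>

static _Atomic int diff;          /* stands in for a per-CPU vm_stat_diff slot */
static _Atomic long global_stat;  /* stands in for the global zone/node counter */

/* Two-step fold: safe only while nothing else can touch 'diff' concurrently. */
static void fold_two_step(void)
{
	int v = atomic_load(&diff);
	/* A counter update landing here would be wiped out by the store below. */
	atomic_store(&diff, 0);
	atomic_fetch_add(&global_stat, v);
}

/*
 * Atomic fold: the exchange reads the old value and clears the slot in one
 * step, analogous to v = xchg(&pzstats->vm_stat_diff[i], 0) in the patch.
 */
static void fold_with_exchange(void)
{
	int v = atomic_exchange(&diff, 0);
	atomic_fetch_add(&global_stat, v);
}

int main(void)
{
	atomic_fetch_add(&diff, 3);   /* pretend some per-CPU updates happened */
	fold_with_exchange();
	printf("global_stat=%ld diff=%d\n",
	       (long)atomic_load(&global_stat), atomic_load(&diff));
	fold_two_step();              /* harmless here: nothing runs concurrently */
	return 0;
}

The point of the exchange is that no update can slip in between reading the
old value and clearing the slot, which is exactly the window the removed
read/write pair leaves open once another CPU may perform the folding.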