The patch titled
     Subject: include/linux/percpu_counter.h: race in uniprocessor percpu_counter_add()
has been added to the -mm mm-nonmm-unstable branch.  Its filename is
     include-linux-percpu_counterh-race-in-uniprocessor-percpu_counter_add.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/include-linux-percpu_counterh-race-in-uniprocessor-percpu_counter_add.patch

This patch will later appear in the mm-nonmm-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Manfred Spraul <manfred@xxxxxxxxxxxxxxxx>
Subject: include/linux/percpu_counter.h: race in uniprocessor percpu_counter_add()
Date: Fri, 16 Dec 2022 16:04:40 +0100

The percpu interface is supposed to be preempt and irq safe.  But the
uniprocessor implementation of percpu_counter_add() is not irq safe: if
an interrupt arrives in the middle of the "+=", the result is undefined,
because the read-modify-write sequence can race with an update performed
by the interrupt handler.

Therefore switch from preempt_disable() to local_irq_save().  This
prevents interrupts from interrupting the "+=", and as a side effect
also prevents preemption.
Link: https://lkml.kernel.org/r/20221216150441.200533-2-manfred@xxxxxxxxxxxxxxxx
Signed-off-by: Manfred Spraul <manfred@xxxxxxxxxxxxxxxx>
Cc: "Sun, Jiebin" <jiebin.sun@xxxxxxxxx>
Cc: <1vier1@xxxxxx>
Cc: Alexander Sverdlin <alexander.sverdlin@xxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/percpu_counter.h |    6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

--- a/include/linux/percpu_counter.h~include-linux-percpu_counterh-race-in-uniprocessor-percpu_counter_add
+++ a/include/linux/percpu_counter.h
@@ -152,9 +152,11 @@ __percpu_counter_compare(struct percpu_c
 
 static inline void percpu_counter_add(struct percpu_counter *fbc, s64 amount)
 {
-	preempt_disable();
+	unsigned long flags;
+
+	local_irq_save(flags);
 	fbc->count += amount;
-	preempt_enable();
+	local_irq_restore(flags);
 }
 
 /* non-SMP percpu_counter_add_local is the same with percpu_counter_add */
_

Patches currently in -mm which might be from manfred@xxxxxxxxxxxxxxxx are

lib-percpu_counter-percpu_counter_add_batch-overflow-underflow.patch
include-linux-percpu_counterh-race-in-uniprocessor-percpu_counter_add.patch
kernel-irq-managec-disable_irq-might-sleep.patch