The quilt patch titled
     Subject: include/linux/percpu_counter.h: race in uniprocessor percpu_counter_add()
has been removed from the -mm tree.  Its filename was
     include-linux-percpu_counterh-race-in-uniprocessor-percpu_counter_add.patch

This patch was dropped because it was merged into the mm-nonmm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: Manfred Spraul <manfred@xxxxxxxxxxxxxxxx>
Subject: include/linux/percpu_counter.h: race in uniprocessor percpu_counter_add()
Date: Fri, 16 Dec 2022 16:04:40 +0100

The percpu interface is supposed to be preempt and irq safe, but the
uniprocessor implementation of percpu_counter_add() is not irq safe: if an
interrupt happens during the +=, the result is undefined.

Therefore, switch from preempt_disable() to local_irq_save().  This
prevents interrupts from interrupting the +=, and as a side effect
prevents preemption.

Link: https://lkml.kernel.org/r/20221216150441.200533-2-manfred@xxxxxxxxxxxxxxxx
Signed-off-by: Manfred Spraul <manfred@xxxxxxxxxxxxxxxx>
Cc: "Sun, Jiebin" <jiebin.sun@xxxxxxxxx>
Cc: <1vier1@xxxxxx>
Cc: Alexander Sverdlin <alexander.sverdlin@xxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

--- a/include/linux/percpu_counter.h~include-linux-percpu_counterh-race-in-uniprocessor-percpu_counter_add
+++ a/include/linux/percpu_counter.h
@@ -152,9 +152,11 @@ __percpu_counter_compare(struct percpu_c
 static inline void
 percpu_counter_add(struct percpu_counter *fbc, s64 amount)
 {
-	preempt_disable();
+	unsigned long flags;
+
+	local_irq_save(flags);
 	fbc->count += amount;
-	preempt_enable();
+	local_irq_restore(flags);
 }
 
 /* non-SMP percpu_counter_add_local is the same with percpu_counter_add */
_

Patches currently in -mm which might be from manfred@xxxxxxxxxxxxxxxx are
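
(Editor's illustration; not part of the patch.)  The lost update described
in the changelog can be reproduced with a small userspace stand-in, in
which a signal handler plays the role of the interrupt and raise() makes
the race deterministic.  The names below are hypothetical and unrelated to
the kernel sources.

	#include <signal.h>
	#include <stdio.h>

	static volatile long long count;	/* stands in for fbc->count */

	/* Stands in for an interrupt handler that also adds to the counter. */
	static void irq_handler(int sig)
	{
		(void)sig;
		count += 1;			/* this update is lost below */
	}

	int main(void)
	{
		signal(SIGALRM, irq_handler);

		/* Racy task-context add, like the old preempt_disable() version:
		 * the += is a read-modify-write that an interrupt can split. */
		long long tmp = count;		/* read */
		raise(SIGALRM);			/* "interrupt" fires mid-update */
		count = tmp + 1;		/* write overwrites the handler's add */

		/* Two increments happened, but only one survives. */
		printf("count = %lld (expected 2)\n", count);
		return 0;
	}

The kernel-side fix disables interrupts around the +=, so the handler
cannot run between the read and the write and no update is lost.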