On 2025-03-14 10:02:47 [-0700], Shakeel Butt wrote:
> > > > on arm64, __this_cpu_add will "load, add, store". preemptible.
> > this_cpu_add() will "disable preemption, atomic-load, add, atomic-store or
> > start over with atomic-load. if succeeded enable preemption and move on"
>
> So, this_cpu_add() on arm64 is not protected against interrupts but is
> protected against preemption. We have the following comment in
> include/linux/percpu-defs.h. Is this not true anymore?

It performs an atomic update: it loads exclusive from memory and then
stores conditionally, and the store only succeeds if the exclusive
monitor did not observe another write to this address in the meantime.
Disabling preemption is only done to ensure that the operation happens
on the local CPU and that the task does not get moved to another CPU
during the operation. A concurrent update to the same memory address
from an interrupt will be caught by the exclusive monitor.

The reason to remain on the same CPU is probably to ensure that
__this_cpu_add() in an IRQ-off region does not clash with an atomic
update performed elsewhere.

While looking at it: there is also the LSE extension, which results in
a single add instruction which _is_ atomic.

Sebastian