On Fri, Jan 27, 2023 at 02:49:46PM +0100, Jules Maselbas wrote:
> Hi Peter,
>
> On Fri, Jan 27, 2023 at 12:18:13PM +0100, Peter Zijlstra wrote:
> > On Thu, Jan 26, 2023 at 06:33:54PM +0100, Jules Maselbas wrote:
> >
> > > @@ -58,9 +61,11 @@ static inline int generic_atomic_fetch_##op(int i, atomic_t *v)	\
> > >  static inline void generic_atomic_##op(int i, atomic_t *v)	\
> > >  {								\
> > >  	unsigned long flags;					\
> > > +	int c;							\
> > >  								\
> > >  	raw_local_irq_save(flags);				\
> > > -	v->counter = v->counter c_op i;				\
> > > +	c = arch_atomic_read(v);				\
> > > +	arch_atomic_set(v, c c_op i);				\
> > >  	raw_local_irq_restore(flags);				\
> > >  }
> >
> > This and the others like it are a bit sad, it explicitly dis-allows the
> > compiler from using memops and forces a load-store.
>
> Good point, I don't know much about atomic memops but this is indeed a
> bit sad to prevent such instructions from being used.

Depends on the platform; x86, s390 etc. have them, RISC-like things
typically don't.

> > The alternative is writing it like:
> >
> > 	*(volatile int *)&v->counter c_op i;
>
> I wonder if it could be possible to write something like:
>
> 	*(volatile int *)&v->counter += i;

Should work, but give it a try, see what it does :-)

> I also noticed that GCC has some builtin/extension to do such things,
> __atomic_OP_fetch and __atomic_fetch_OP, but I do not know if this
> can be used in the kernel.

On a per-architecture basis only; the C/C++ memory model does not match
the Linux Kernel memory model, so using the compiler to generate the
atomic ops is somewhat tricky and needs architecture audits.
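
As a rough sketch (not the actual patch), the volatile-cast idea above,
expanded for the "add" case, might look something like this; the
volatile-qualified access still has to touch memory, but the compiler is
left free to fold the update into a single memory-operand instruction on
architectures that have them:

	static inline void generic_atomic_add(int i, atomic_t *v)
	{
		unsigned long flags;

		raw_local_irq_save(flags);
		/* volatile access: cannot be elided, but may become a memop */
		*(volatile int *)&v->counter += i;
		raw_local_irq_restore(flags);
	}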
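
For reference, the GCC builtins mentioned above take an explicit C11
memory order, e.g. (purely illustrative, not a proposal; the ordering
argument here is just an example):

	/* returns the old value of v->counter */
	int old = __atomic_fetch_add(&v->counter, i, __ATOMIC_RELAXED);

and it is precisely that C/C++ ordering model which does not line up
with the LKMM without a per-architecture audit.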