On Fri, Jun 17, 2022 at 7:45 PM Guo Ren <guoren@xxxxxxxxxx> wrote:
> On Sat, Jun 18, 2022 at 12:11 AM Arnd Bergmann <arnd@xxxxxxxx> wrote:
> >
> > Do you actually need the size 1 as well?
> >
> > Generally speaking, I would like to rework the xchg()/cmpxchg() logic
> > to only cover the 32-bit and word-sized (possibly 64-bit) case, while
> > having separate optional 8-bit and 16-bit functions. I had a patch for
>
> Why not prevent 8-bit and 16-bit xchg()/cmpxchg() directly? e.g., move
> qspinlock's xchg_tail() to a per-arch arch_xchg_tail().
> That means Linux doesn't provide a mixed-size atomic operation primitive.
>
> What does your "separate optional 8-bit and 16-bit functions" mean here?

What I have in mind is something like

static inline u8 arch_xchg8(u8 *ptr, u8 x) {...}
static inline u16 arch_xchg16(u16 *ptr, u16 x) {...}
static inline u32 arch_xchg32(u32 *ptr, u32 x) {...}
static inline u64 arch_xchg64(u64 *ptr, u64 x) {...}

#ifdef CONFIG_64BIT
#define xchg(ptr, x) (sizeof(*(ptr)) == 8 ?			\
	arch_xchg64((u64 *)(ptr), (uintptr_t)(x)) :		\
	arch_xchg32((u32 *)(ptr), (uintptr_t)(x)))
#else
#define xchg(ptr, x) arch_xchg32((u32 *)(ptr), (uintptr_t)(x))
#endif

This means most of the helpers can actually be normal inline
functions, and only 64-bit architectures need the special case
of dealing with non-u32-sized pointers and 'long' values.
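On riscv, arch_xchg32() could then be a plain AMO without any of the
size dispatch, something like this (untested sketch; the real thing
would also want the relaxed/acquire/release variants):

static inline u32 arch_xchg32(u32 *ptr, u32 x)
{
	u32 ret;

	/* amoswap.w swaps a 32-bit word, .aqrl makes it fully ordered */
	__asm__ __volatile__ (
		"	amoswap.w.aqrl %0, %2, %1\n"
		: "=r" (ret), "+A" (*ptr)
		: "r" (x)
		: "memory");

	return ret;
}

And the optional arch_xchg16() on architectures without sub-word AMOs
could be emulated with a masked 32-bit cmpxchg loop, roughly like the
sketch below (untested, assumes little-endian and an arch_cmpxchg32()
following the same naming scheme, returning the old value):

static inline u16 arch_xchg16(u16 *ptr, u16 x)
{
	u32 *p = (u32 *)((unsigned long)ptr & ~3UL);	/* containing word */
	int shift = ((unsigned long)ptr & 2) * 8;	/* 0 or 16 */
	u32 mask = 0xffffU << shift;
	u32 old = *p, prev;

	for (;;) {
		u32 new = (old & ~mask) | ((u32)x << shift);

		prev = arch_cmpxchg32(p, old, new);
		if (prev == old)
			break;
		old = prev;
	}

	return (old & mask) >> shift;
}

qspinlock's xchg_tail() would then call arch_xchg16() where the
architecture provides a native version, and fall back to something
like the loop above otherwise, instead of going through a mixed-size
xchg().

       Arnd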