On Fri, Dec 30, 2022 at 03:15:52PM +0100, Andrzej Hajda wrote:
> __xchg will be used for non-atomic xchg macro.
>
> Signed-off-by: Andrzej Hajda <andrzej.hajda@xxxxxxxxx>
> Reviewed-by: Arnd Bergmann <arnd@xxxxxxxx>
> ---
> v2: squashed all arch patches into one
> v3: fixed alpha/xchg_local, thx to lkp@xxxxxxxxx
> ---
> ...
>  arch/s390/include/asm/cmpxchg.h | 4 ++--
>
> diff --git a/arch/s390/include/asm/cmpxchg.h b/arch/s390/include/asm/cmpxchg.h
> index 84c3f0d576c5b1..efc16f4aac8643 100644
> --- a/arch/s390/include/asm/cmpxchg.h
> +++ b/arch/s390/include/asm/cmpxchg.h
> @@ -14,7 +14,7 @@ void __xchg_called_with_bad_pointer(void);
> -static __always_inline unsigned long __xchg(unsigned long x,
> +static __always_inline unsigned long __arch_xchg(unsigned long x,
>  					    unsigned long address, int size)
Please adjust the alignment of the second line.
> @@ -77,7 +77,7 @@ static __always_inline unsigned long __xchg(unsigned long x,
>  	__typeof__(*(ptr)) __ret;					\
>  									\
>  	__ret = (__typeof__(*(ptr)))					\
> -		__xchg((unsigned long)(x), (unsigned long)(ptr),	\
> +		__arch_xchg((unsigned long)(x), (unsigned long)(ptr),	\
>  			sizeof(*(ptr)));				\
Same here, please adjust the alignment. The same is true for a couple of other architectures, though I'm not sure if they care.