The patch below does not apply to the 5.4-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable@xxxxxxxxxxxxxxx>.

Thanks,
Sasha

------------------ original commit in Linus's tree ------------------

From c87e12e0e8c1241410e758e181ca6bf23efa5b5b Mon Sep 17 00:00:00 2001
From: Huacai Chen <chenhuacai@xxxxxxxxxxx>
Date: Tue, 19 Mar 2024 15:50:34 +0800
Subject: [PATCH] LoongArch: Change __my_cpu_offset definition to avoid
 mis-optimization

Since GCC commit 3f13154553f8546a ("df-scan: remove ad-hoc handling of
global regs in asms"), global registers are no longer forced to be added
to the def-use chain. As a result, current_thread_info(),
current_stack_pointer and __my_cpu_offset may be hoisted out of loops
because they are no longer treated as "volatile variables".

This optimization is still correct for the current_thread_info() and
current_stack_pointer usages because they are associated with a thread.
However, it is wrong for __my_cpu_offset because it is associated with a
CPU rather than a thread: if the thread migrates to a different CPU
inside the loop, __my_cpu_offset must change with it.

Change the __my_cpu_offset definition so that it is treated as a
"volatile variable", in order to avoid this mis-optimization.

Cc: stable@xxxxxxxxxxxxxxx
Reported-by: Xiaotian Wu <wuxiaotian@xxxxxxxxxxx>
Reported-by: Miao Wang <shankerwangmiao@xxxxxxxxx>
Signed-off-by: Xing Li <lixing@xxxxxxxxxxx>
Signed-off-by: Hongchen Zhang <zhanghongchen@xxxxxxxxxxx>
Signed-off-by: Rui Wang <wangrui@xxxxxxxxxxx>
Signed-off-by: Huacai Chen <chenhuacai@xxxxxxxxxxx>
---
 arch/loongarch/include/asm/percpu.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/loongarch/include/asm/percpu.h b/arch/loongarch/include/asm/percpu.h
index 9b36ac003f890..8f290e5546cf7 100644
--- a/arch/loongarch/include/asm/percpu.h
+++ b/arch/loongarch/include/asm/percpu.h
@@ -29,7 +29,12 @@ static inline void set_my_cpu_offset(unsigned long off)
 	__my_cpu_offset = off;
 	csr_write64(off, PERCPU_BASE_KS);
 }
-#define __my_cpu_offset __my_cpu_offset
+
+#define __my_cpu_offset \
+({ \
+	__asm__ __volatile__("":"+r"(__my_cpu_offset)); \
+	__my_cpu_offset; \
+})
 
 #define PERCPU_OP(op, asm_op, c_op)					\
 static __always_inline unsigned long __percpu_##op(void *ptr,		\
-- 
2.43.0
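
For whoever picks up the 5.4 backport: the new macro works because
__my_cpu_offset is a global register variable, so the empty asm with a
"+r" constraint tells GCC the register may have changed and each use
must re-read it rather than reuse a value hoisted out of a loop. Below
is a minimal sketch of that idiom, not the exact 5.4 code; the $r21
binding and the sum_of_offsets() loop are illustrative assumptions
added here, not part of the patch.

/* Sketch only: global register variable, as used for the per-CPU base
 * on LoongArch ($r21 assumed here for illustration). */
register unsigned long __my_cpu_offset __asm__("$r21");

/*
 * The empty asm with a "+r" constraint makes GCC assume $r21 may have
 * changed, so every expansion re-reads the register instead of reusing
 * a value cached before an enclosing loop.
 */
#define __my_cpu_offset						\
({								\
	__asm__ __volatile__("" : "+r"(__my_cpu_offset));	\
	__my_cpu_offset;					\
})

/* Illustrative consumer: if the task migrates to another CPU between
 * iterations, each pass now picks up that CPU's offset instead of a
 * value hoisted out of the loop. */
static unsigned long sum_of_offsets(int iterations)
{
	unsigned long sum = 0;
	int i;

	for (i = 0; i < iterations; i++)
		sum += __my_cpu_offset;

	return sum;
}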