EVA linked loads (LLE) and conditional stores (SCE) should be used on
EVA kernels for the MIPS_ATOMIC_SET operation of the sysmips system
call, or else the atomic set will apply to the kernel view of the
virtual address space (potentially unmapped on EVA kernels) rather than
the user view (TLB mapped).

Signed-off-by: James Hogan <james.hogan@xxxxxxxxxx>
Cc: Ralf Baechle <ralf@xxxxxxxxxxxxxx>
Cc: linux-mips@xxxxxxxxxxxxxx
Cc: <stable@xxxxxxxxxxxxxxx> # 3.15.x-
---
 arch/mips/kernel/syscall.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/mips/kernel/syscall.c b/arch/mips/kernel/syscall.c
index 3971220ea925..ca54ac40252b 100644
--- a/arch/mips/kernel/syscall.c
+++ b/arch/mips/kernel/syscall.c
@@ -29,6 +29,7 @@
 #include <linux/sched/task_stack.h>
 
 #include <asm/asm.h>
+#include <asm/asm-eva.h>
 #include <asm/branch.h>
 #include <asm/cachectl.h>
 #include <asm/cacheflush.h>
@@ -131,9 +132,11 @@ static inline int mips_atomic_set(unsigned long addr, unsigned long new)
 	__asm__ __volatile__ (
 	"	.set	"MIPS_ISA_ARCH_LEVEL"			\n"
 	"	li	%[err], 0				\n"
-	"1:	ll	%[old], (%[addr])			\n"
+	"1:							\n"
+	user_ll("%[old]", "(%[addr])")
 	"	move	%[tmp], %[new]				\n"
-	"2:	sc	%[tmp], (%[addr])			\n"
+	"2:							\n"
+	user_sc("%[tmp]", "(%[addr])")
 	"	beqz	%[tmp], 4f				\n"
 	"3:							\n"
 	"	.insn						\n"
-- 
git-series 0.8.10
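
For context, a minimal sketch of what the user_ll()/user_sc() helpers
are assumed to do on the two kernel configurations. This is not the
literal <asm/asm-eva.h> definition (the real header also provides the
kernel_* variants and handles pre-R6/ISA override details); it only
illustrates why switching the asm body to these macros makes the LL/SC
pair operate on the user (EVA) view of the address space:

	/*
	 * Sketch only: assumed expansion of the user_* accessor macros.
	 * On an EVA kernel the EVA user-view instructions (LLE/SCE) are
	 * emitted; otherwise they degrade to the ordinary LL/SC.  The
	 * arguments are string fragments pasted into the inline asm body.
	 */
	#ifdef CONFIG_EVA
	# define user_ll(reg, addr)	"lle\t" reg ", " addr "\n"
	# define user_sc(reg, addr)	"sce\t" reg ", " addr "\n"
	#else
	# define user_ll(reg, addr)	"ll\t" reg ", " addr "\n"
	# define user_sc(reg, addr)	"sc\t" reg ", " addr "\n"
	#endif

	/*
	 * With the patch applied, the mips_atomic_set() asm therefore
	 * expands to roughly the following on an EVA kernel:
	 *
	 *	1:
	 *		lle	%[old], (%[addr])
	 *		move	%[tmp], %[new]
	 *	2:
	 *		sce	%[tmp], (%[addr])
	 *		beqz	%[tmp], 4f
	 *
	 * i.e. the load-linked/store-conditional pair now targets the
	 * TLB-mapped user view rather than the kernel view.
	 */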