Adjust the atomic loop in the MIPS_ATOMIC_SET operation of the sysmips
system call to branch straight back to the linked load rather than
jumping via a different subsection (whose purpose remains a mystery to
me).

Signed-off-by: James Hogan <james.hogan@xxxxxxxxxx>
Cc: Ralf Baechle <ralf@xxxxxxxxxxxxxx>
Cc: linux-mips@xxxxxxxxxxxxxx
---
 arch/mips/kernel/syscall.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/arch/mips/kernel/syscall.c b/arch/mips/kernel/syscall.c
index ca54ac40252b..6c6bf43d681b 100644
--- a/arch/mips/kernel/syscall.c
+++ b/arch/mips/kernel/syscall.c
@@ -137,13 +137,9 @@ static inline int mips_atomic_set(unsigned long addr, unsigned long new)
 		"	move	%[tmp], %[new]			\n"
 		"2:						\n"
 		user_sc("%[tmp]", "(%[addr])")
-		"	beqz	%[tmp], 4f			\n"
+		"	beqz	%[tmp], 1b			\n"
 		"3:						\n"
 		"	.insn					\n"
-		"	.subsection 2				\n"
-		"4:	b	1b				\n"
-		"	.previous				\n"
-		"						\n"
 		"	.section .fixup,\"ax\"			\n"
 		"5:	li	%[err], %[efault]		\n"
 		"	j	3b				\n"
--
git-series 0.8.10
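
For illustration only, below is a minimal, self-contained sketch of the plain
MIPS ll/sc retry loop that the patched sequence reduces to: on store-conditional
failure the beqz branches straight back to the linked load at label 1, with no
intermediate subsection trampoline. This is not the kernel code -- the function
name atomic_set_sketch is made up, and the kernel's user_ll/user_sc wrappers and
.fixup handling for -EFAULT are omitted.

	#include <stdint.h>

	/*
	 * Illustrative sketch of an ll/sc atomic-set loop for a MIPS
	 * target with ll/sc support.  The assembler (in its default
	 * reorder mode) fills the branch delay slot after beqz.
	 */
	static inline uint32_t atomic_set_sketch(volatile uint32_t *addr,
						 uint32_t new)
	{
		uint32_t old, tmp;

		__asm__ __volatile__(
		"1:	ll	%[old], (%[addr])	\n" /* linked load */
		"	move	%[tmp], %[new]		\n"
		"	sc	%[tmp], (%[addr])	\n" /* conditional store */
		"	beqz	%[tmp], 1b		\n" /* sc failed: retry at ll */
		: [old] "=&r" (old), [tmp] "=&r" (tmp)
		: [addr] "r" (addr), [new] "r" (new)
		: "memory");

		return old;	/* previous value of *addr */
	}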