Hi David,

On Wed, May 31, 2017 at 09:28:36AM -0700, David Daney wrote:
> On 05/31/2017 08:19 AM, James Hogan wrote:
> > Adjust the atomic loop in the MIPS_ATOMIC_SET operation of the sysmips
> > system call to branch straight back to the linked load rather than
> > jumping via a different subsection (whose purpose remains a mystery to
> > me).
>
> The subsection keeps the code for the (hopefully) cold path out of line
> which should result in a smaller cache footprint in the hot path.

Hmm, yes that would make sense if it did something useful there, but it
just immediately jumps back to the ll.

Cheers
James

> >
> > Signed-off-by: James Hogan <james.hogan@xxxxxxxxxx>
> > Cc: Ralf Baechle <ralf@xxxxxxxxxxxxxx>
> > Cc: linux-mips@xxxxxxxxxxxxxx
> > ---
> >  arch/mips/kernel/syscall.c | 6 +-----
> >  1 file changed, 1 insertion(+), 5 deletions(-)
> >
> > diff --git a/arch/mips/kernel/syscall.c b/arch/mips/kernel/syscall.c
> > index ca54ac40252b..6c6bf43d681b 100644
> > --- a/arch/mips/kernel/syscall.c
> > +++ b/arch/mips/kernel/syscall.c
> > @@ -137,13 +137,9 @@ static inline int mips_atomic_set(unsigned long addr, unsigned long new)
> >  		"	move	%[tmp], %[new]		\n"
> >  		"2:					\n"
> >  		user_sc("%[tmp]", "(%[addr])")
> > -		"	beqz	%[tmp], 4f		\n"
> > +		"	beqz	%[tmp], 1b		\n"
> >  		"3:					\n"
> >  		"	.insn				\n"
> > -		"	.subsection 2			\n"
> > -		"4:	b	1b			\n"
> > -		"	.previous			\n"
> > -		"					\n"
> >  		"	.section .fixup,\"ax\"		\n"
> >  		"5:	li	%[err], %[efault]	\n"
> >  		"	j	3b			\n"
> >