Markos Chandras <Markos.Chandras@xxxxxxxxxx> writes:
> On 01/20/2015 01:04 AM, Maciej W. Rozycki wrote:
> > On Fri, 16 Jan 2015, Markos Chandras wrote:
> >
> >> sub $reg, imm is not a real MIPS instruction. The assembler replaces
> >> that with 'addi $reg, -imm'. However, addi has been removed from R6,
> >> so we replace the 'sub' instruction with 'addiu' instead.
> >>
> >> Signed-off-by: Markos Chandras <markos.chandras@xxxxxxxxxx>
> >> ---
> >>  arch/mips/include/asm/spinlock.h | 2 +-
> >>  1 file changed, 1 insertion(+), 1 deletion(-)
> >>
> >> diff --git a/arch/mips/include/asm/spinlock.h b/arch/mips/include/asm/spinlock.h
> >> index c6d06d383ef9..500050d3bda6 100644
> >> --- a/arch/mips/include/asm/spinlock.h
> >> +++ b/arch/mips/include/asm/spinlock.h
> >> @@ -276,7 +276,7 @@ static inline void arch_read_unlock(arch_rwlock_t *rw)
> >>  	do {
> >>  		__asm__ __volatile__(
> >>  		"1:	ll	%1, %2	# arch_read_unlock	\n"
> >> -		"	sub	%1, 1				\n"
> >> +		"	addiu	%1, -1				\n"
> >>  		"	sc	%1, %0				\n"
> >>  		: "=" GCC_OFF12_ASM() (rw->lock), "=&r" (tmp)
> >>  		: GCC_OFF12_ASM() (rw->lock)
> >
> >  This integer overflow trap is deliberate here -- have you seen the note
> > just above:
> >
> > /* Note the use of sub, not subu which will make the kernel die with an
> >    overflow exception if we ever try to unlock an rwlock that is already
> >    unlocked or is being held by a writer. */
> >
> > ?
> >
> >  What this shows really is a GAS bug fix for the SUB macro is needed
> > similar to what I suggested in 12/70 for ADDI (from the situation I infer
> > there is some real work to do in GAS in this area; adding Matthew as a
> > recipient to raise his awareness) so that it does not expand to ADDI where
> > the architecture or processor selected do not support it.  Instead a
> > longer sequence involving SUB has to be produced.

The assembler is at least consistent at the moment, as the 'sub' macro is
disabled for R6.
I am very keen to stop carrying around historic baggage where it is not
necessary. R6 is one place we can do that and deal with any code changes
that are required.

> > However, regardless, I suggest code like:
> >
> > /* There's no R6 ADDI instruction so use the ADD register version instead. */
> > #ifdef CONFIG_CPU_MIPSR6
> > #define GCC_ADDI_ASM() "r"
> > #else
> > #define GCC_ADDI_ASM() "I"
> > #endif
> >
> > 	__asm__ __volatile__(
> > 	"1:	ll	%1, %2	# arch_read_unlock	\n"
> > 	"	sub	%1, %3				\n"
> > 	"	sc	%1, %0				\n"
> > 	: "=" GCC_OFF12_ASM() (rw->lock), "=&r" (tmp)
> > 	: GCC_OFF12_ASM() (rw->lock), GCC_ADDI_ASM() (1)
> > 	: "memory");
> >
> > (untested, but should work) so that there's still a single instruction
> > only in the LL/SC loop and consequently no increased lock contention risk.

I'm afraid I don't like this. The obfuscation of the underlying ISA is
very confusing. In this case it is 100% clear that the asm instruction is
going to get an immediate operand, so the instruction is 'addi' with a
negated immediate. However, it costs nothing (in the loop) to use 'add'
and an 'r' constraint in all cases. Yes, there will be an extra setup
instruction and register used, but the assembly code is clear and maps 1:1
to real instructions for all ISAs.

(Note that this asm block does not appear to need to clobber memory
either, as the effects on memory are correctly stated in the constraints.)

> > As a side note, this could be cleaned up to use a "+" input/output
> > constraint; such a clean-up will be welcome -- although to be complete, a
> > review of all the asms will be required (this may bump up the GCC version
> > requirement though, ISTR bugs in this area).

I believe some of these asm blocks using ll/sc already have '+' in the
constraints for the memory location, so perhaps that is either already a
problem or not an issue.

Matthew
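For reference, the shape Matthew argues for above -- a real ADD with an
"r"-constrained register operand, plus the "+" input/output constraint on
the lock word that Maciej suggests -- might look like the following
untested sketch (not a patch: the loop structure, the -1 operand, and the
use of the tree's GCC_OFF12_ASM() macro are assumptions here):

```c
/* Untested sketch.  'add' is a real, trapping instruction in every MIPS
 * ISA revision including R6, so no assembler macro expansion and no
 * ISA-conditional constraint is needed; the overflow trap on a bogus
 * unlock is preserved.  The cost is one extra register holding -1,
 * set up outside the LL/SC loop. */
do {
	__asm__ __volatile__(
	"1:	ll	%1, %0	# arch_read_unlock	\n"
	"	add	%1, %1, %2			\n"
	"	sc	%1, %0				\n"
	: "+" GCC_OFF12_ASM() (rw->lock), "=&r" (tmp)
	: "r" (-1));
} while (unlikely(!tmp));
```

With the "+" constraint the lock word appears once as a read-write
operand, so the separate input operand and the "memory" clobber from the
quoted version are no longer needed.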