On 2020-07-19 12:35 p.m., Helge Deller wrote:
>> In reviewing the atomic operations in entry.S, I realized that there is also a bug in the
>> spin lock release code of the TLB handler. Space id's are 64 bits on 64-bit targets. So,
>> using the least significant 32 bits to reset the spin lock is not safe. The lock will not
>> be freed if the bits are all zero.
> Hmm..
> The space ids on 64-bit Linux are limited to (see arch/parisc/mm/init.c):
> #define NR_SPACE_IDS 262144
> and SID == 0 can't happen for userspace (it's blocked in the space_id[] bitmap).
> So, I think this part was ok.
Okay, then the change to store 1 was unnecessary.
>
>> @@ -467,10 +466,9 @@
>>  	/* Release pa_tlb_lock lock without reloading lock address. */
>>  	.macro		tlb_unlock0	spc,tmp,tmp1
>>  #ifdef CONFIG_SMP
>> +	ldi		1,\tmp1
>> 98:	or,COND(=)	%r0,\spc,%r0
>> -	LDCW		0(\tmp),\tmp1
>> -	or,COND(=)	%r0,\spc,%r0
>> -	stw		\spc,0(\tmp)
>> +	stw		\tmp1,0(\tmp)
>> 99:	ALTERNATIVE(98b, 99b, ALT_COND_NO_SMP, INSN_NOP)
> In tlb_lock() we only lock for non-kernel SIDs (!=0),
> but now you unlock unconditionally.
No, there's still an "or" instruction to nullify the store used to release the lock.

I will go back to using the \spc register as this is time-critical code. I will add a note
regarding the number of space id's.

Dave

-- 
John David Anglin  dave.anglin@xxxxxxxx
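
For reference, a rough C sketch of the unlock semantics being discussed. The function and
variable names here are illustrative only, not the kernel's actual symbols:

/*
 * Rough C equivalent of the tlb_unlock0 release path (sketch only).
 *
 * On PA-RISC, "or,COND(=) %r0,\spc,%r0" nullifies the following
 * instruction when \spc is zero, so the store that releases the
 * lock is skipped for kernel space (SID 0), matching tlb_lock(),
 * which only takes the lock for non-kernel SIDs.
 */
static inline void tlb_unlock0_sketch(volatile unsigned int *lock,
				      unsigned long spc)
{
	if (spc != 0)		/* or,COND(=) %r0,\spc,%r0 nullifies...	*/
		*lock = spc;	/* ...this store when spc == 0.		*/
				/* Any non-zero value (e.g. the "1"	*/
				/* from the patch) frees the lock;	*/
				/* since user SIDs are < NR_SPACE_IDS,	*/
				/* the low 32 bits of \spc are never	*/
				/* all zero here.			*/
}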