On 6/18/22 01:45, Guo Ren wrote:
>> I see that the qspinlock code actually calls a 'relaxed' version of
>> xchg16(), but you only implement the one with the full barrier. Is it
>> possible to directly provide a relaxed version that has something less
>> than __WEAK_LLSC_MB?
> I am also curious about __WEAK_LLSC_MB; it looks like magic to me. For a
> strong (fully ordered) cmpxchg, how does it prevent accesses that precede
> the cmpxchg from being reordered after the sc?
> #define __cmpxchg_asm(ld, st, m, old, new)			\
> ({								\
> 	__typeof(old) __ret;					\
> 								\
> 	__asm__ __volatile__(					\
> 	"1: " ld " %0, %2 # __cmpxchg_asm \n"			\
> 	" bne %0, %z3, 2f \n"					\
> 	" or $t0, %z4, $zero \n"				\
> 	" " st " $t0, %1 \n"					\
> 	" beq $zero, $t0, 1b \n"				\
> 	"2: \n"							\
> 	__WEAK_LLSC_MB						\
> And its __smp_mb__xxx helpers are just defined as a compiler barrier()?
>
> #define __smp_mb__before_atomic()	barrier()
> #define __smp_mb__after_atomic()	barrier()
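
(For context: as far as I can tell, the pieces being asked about boil down
to roughly the following on the current port. This is paraphrased from
memory, and the exact spelling of __WEAK_LLSC_MB is my assumption rather
than verbatim kernel source.)

#define __WEAK_LLSC_MB	"	dbar 0	\n"	/* assumed: dbar 0, i.e. the full barrier */
#define __smp_mb()	__asm__ __volatile__("dbar 0" : : : "memory")
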
I know this one. There is only one type of barrier defined in v1.00 of the
LoongArch architecture, namely the full barrier, but this is going to
change. Huacai hinted in the bringup patchset that the 3A6000 and later
models would have finer-grained barriers, so these could indeed be relaxed
in the future; it's just that Huacai has to wait for their embargo to
expire.
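
To make the "could be relaxed" part concrete: a relaxed cmpxchg would
presumably be the exact same LL/SC loop with the trailing __WEAK_LLSC_MB
dropped, roughly like the hypothetical sketch below. This is only an
illustration: the name __cmpxchg_asm_relaxed and the constraint/clobber
list (which was not part of the quote above) are my guesses, not code from
the tree.

#define __cmpxchg_asm_relaxed(ld, st, m, old, new)		\
({								\
	__typeof(old) __ret;					\
								\
	__asm__ __volatile__(					\
	"1: " ld " %0, %2 # __cmpxchg_asm_relaxed \n"		\
	" bne %0, %z3, 2f \n"					\
	" or $t0, %z4, $zero \n"				\
	" " st " $t0, %1 \n"					\
	" beq $zero, $t0, 1b \n"				\
	"2: \n"							\
	/* note: unlike __cmpxchg_asm, no __WEAK_LLSC_MB here */\
	: "=&r" (__ret), "=ZB" (*m)	/* assumed operands */	\
	: "ZB" (*m), "Jr" (old), "Jr" (new)			\
	: "t0", "memory");					\
								\
	__ret;							\
})

The same shape would apply to the xchg16() that qspinlock wants: keep the
access sequence as-is and simply leave out the trailing full barrier in the
_relaxed flavour.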