Hi Huacai,

On Sat, Jun 24, 2017 at 05:23:52PM +0800, Huacai Chen wrote:
> You are right, but it seems like __WEAK_LLSC_MB is already the best
> name for this case. Maybe I define a macro named __VERY_WEAK_LLSC_MB
> to expand a "sync" on Loongson?

I suppose so. Can you clarify what very weak ordering means in this
context? I.e. in what case is it insufficient to have the sync before
the label rather than before every ll in the retry loop?

Cheers
James

>
> Huacai
>
> On Sat, Jun 24, 2017 at 5:02 PM, James Hogan <james.hogan@xxxxxxxxxx> wrote:
> > On 24 June 2017 09:55:14 BST, Huacai Chen <chenhc@xxxxxxxxxx> wrote:
> >> Hi, James,
> >>
> >> smp_mb__before_llsc() can not be used in all cases, e.g., in
> >> arch/mips/include/asm/spinlock.h and other similar cases which have a
> >> label before ll/lld. So, I think it is better to keep it as is to keep
> >> consistency.
> >
> > I know. I didn't mean use smp_mb__before_llsc() directly, I just meant
> > use something similar directly before the ll that would expand to
> > nothing on non-Loongson kernels and still avoid the mass duplication of
> > inline asm which leads to divergence, bitrot, and maintenance problems.
> >
> > Cheers
> > James
> >
> >>
> >> Huacai
> >>
> >> On Fri, Jun 23, 2017 at 10:54 PM, James Hogan <james.hogan@xxxxxxxxxx> wrote:
> >>> On Thu, Jun 22, 2017 at 11:06:56PM +0800, Huacai Chen wrote:
> >>>> diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h
> >>>> index 0ab176b..e0002c58 100644
> >>>> --- a/arch/mips/include/asm/atomic.h
> >>>> +++ b/arch/mips/include/asm/atomic.h
> >>>> @@ -56,6 +56,22 @@ static __inline__ void atomic_##op(int i, atomic_t * v)        \
> >>>>   "     .set    mips0                                   \n"   \
> >>>>   : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (v->counter)         \
> >>>>   : "Ir" (i));                                                 \
> >>>> + } else if (kernel_uses_llsc && LOONGSON_LLSC_WAR) {          \
> >>>> +         int temp;                                            \
> >>>> +                                                              \
> >>>> +         do {                                                 \
> >>>> +                 __asm__ __volatile__(                        \
> >>>> +                 "       .set    "MIPS_ISA_LEVEL"        \n"  \
> >>>> +                 __WEAK_LLSC_MB                               \
> >>>> +                 "       ll      %0, %1  # atomic_" #op "\n"  \
> >>>> +                 "       " #asm_op " %0, %2              \n"  \
> >>>> +                 "       sc      %0, %1                  \n"  \
> >>>> +                 "       .set    mips0                   \n"  \
> >>>> +                 : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (v->counter) \
> >>>> +                 : "Ir" (i));                                 \
> >>>> +         } while (unlikely(!temp));                           \
> >>>
> >>> Can Loongson use the common versions of all these bits of assembly by
> >>> adding a LOONGSON_LLSC_WAR dependent smp_mb__before_llsc()-like macro
> >>> before the asm?
> >>>
> >>> It would save a lot of duplication, avoid potential bitrot and
> >>> divergence, and make the patch much easier to review.
> >>>
> >>> Cheers
> >>> James
> >
> >
> > --
> > James Hogan
> >
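
The macro James describes would expand to a Loongson-only "sync" directly before the ll and to nothing on other CPUs, so the shared LL/SC asm would not need to be duplicated per workaround. A minimal sketch of that idea follows. It assumes the usual MIPS atomic.h context (atomic_t, MIPS_ISA_LEVEL and GCC_OFF_SMALL_ASM as in the quoted patch); the names CONFIG_CPU_LOONGSON3_WORKAROUNDS, __loongson_llsc_mb() and ATOMIC_OP_SKETCH() are illustrative assumptions, not names from the patch under review.

/*
 * Illustrative sketch only, not the patch under review: emit a "sync"
 * before ll/lld on Loongson-3 (assumed Kconfig gate) and nothing on
 * other CPUs, so the common LL/SC asm stays shared.
 */
#ifdef CONFIG_CPU_LOONGSON3_WORKAROUNDS
#define __loongson_llsc_mb()    "       sync                            \n"
#else
#define __loongson_llsc_mb()    ""
#endif

/* Simplified stand-in for ATOMIC_OP(), showing the placement before ll. */
#define ATOMIC_OP_SKETCH(op, asm_op)                                    \
static inline void atomic_##op##_sketch(int i, atomic_t *v)             \
{                                                                       \
        int temp;                                                       \
                                                                        \
        __asm__ __volatile__(                                           \
        "       .set    "MIPS_ISA_LEVEL"                        \n"     \
        "1:" __loongson_llsc_mb()  /* sync re-runs on every retry */    \
        "       ll      %0, %1          # atomic_" #op "        \n"     \
        "       " #asm_op "     %0, %2                          \n"     \
        "       sc      %0, %1                                  \n"     \
        "       beqz    %0, 1b                                  \n"     \
        "       .set    mips0                                   \n"     \
        : "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (v->counter)            \
        : "Ir" (i));                                                    \
}

/* e.g. ATOMIC_OP_SKETCH(add, addu) would define atomic_add_sketch(). */

For what it is worth, mainline MIPS later introduced a loongson_llsc_mb() barrier gated by CONFIG_CPU_LOONGSON3_WORKAROUNDS; the sketch borrows those names, but it is not that implementation.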