Re: [PATCH] LoongArch: Add qspinlock support

On Sun, Jun 19, 2022 at 12:28 PM hev <r@xxxxxx> wrote:
>
> Hello,
>
> On Sat, Jun 18, 2022 at 8:59 PM WANG Xuerui <kernel@xxxxxxxxxx> wrote:
> >
> > On 6/18/22 01:45, Guo Ren wrote:
> > >
> > >> I see that the qspinlock() code actually calls a 'relaxed' version of xchg16(),
> > >> but you only implement the one with the full barrier. Is it possible to
> > >> directly provide a relaxed version that has something less than the
> > >> __WEAK_LLSC_MB?
> > > I am also curious about __WEAK_LLSC_MB; it seems very magic. How does
> > > it prevent preceding accesses from happening after the sc for a strong
> > > cmpxchg?
> > >
> > > #define __cmpxchg_asm(ld, st, m, old, new)                              \
> > > ({                                                                      \
> > >          __typeof(old) __ret;                                            \
> > >                                                                          \
> > >          __asm__ __volatile__(                                           \
> > >          "1:     " ld "  %0, %2          # __cmpxchg_asm \n"             \
> > >          "       bne     %0, %z3, 2f                     \n"             \
> > >          "       or      $t0, %z4, $zero                 \n"             \
> > >          "       " st "  $t0, %1                         \n"             \
> > >          "       beq     $zero, $t0, 1b                  \n"             \
> > >          "2:                                             \n"             \
> > >          __WEAK_LLSC_MB                                                  \
> > >
> > > And its __smp_mb__xxx are just defined as a compiler barrier()?
> > > #define __smp_mb__before_atomic()       barrier()
> > > #define __smp_mb__after_atomic()        barrier()
> > I know this one. There is only one type of barrier defined in the v1.00
> > of LoongArch, that is the full barrier, but this is going to change.
> > Huacai hinted in the bringup patchset that 3A6000 and later models would
> > have finer-grained barriers. So these indeed could be relaxed in the
> > future, just that Huacai has to wait for their embargo to expire.
> >
>
> IIRC, the Loongson LL/SC behaves differently from others:
>
> Loongson:
> LL: Full barrier + Load exclusive
> SC: Store conditional + Full barrier
How about your "am"#asm_op"_db."?

Full barrier + AMO + Full barrier?
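
For context, the "_db" AMO the question refers to comes from string-pasting
in the LoongArch atomic ops. A rough sketch of the pattern, paraphrased from
arch/loongarch/include/asm/atomic.h (operand constraints written from memory,
so treat them as illustrative only):

#define ATOMIC_OP(op, I, asm_op)					\
static inline void arch_atomic_##op(int i, atomic_t *v)		\
{									\
	/* "am"#asm_op"_db.w" pastes into e.g. amadd_db.w */		\
	__asm__ __volatile__(						\
	"am"#asm_op"_db.w" " $zero, %1, %0	\n"			\
	: "+ZB" (v->counter)						\
	: "r" (I)							\
	: "memory");							\
}

So ATOMIC_OP(add, i, add) emits amadd_db.w, and the question above is whether
that _db suffix means "full barrier + AMO + full barrier" or something weaker.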

>
> Others:
> LL: Load exclusive + Acquire barrier
> SC: Release barrier + Store conditional
>
> So we just need to prevent compiler reordering before/after atomics.
> And this is why we need __WEAK_LLSC_MB: to prevent runtime reordering
> of loads after LL.
>
> hev
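
To tie this back to the relaxed xchg16()/cmpxchg question above: if Loongson
LL/SC really do act as full barriers, then a _relaxed variant could presumably
just drop __WEAK_LLSC_MB and keep only the "memory" clobber for compiler
ordering. A hypothetical sketch, not the in-tree code (the name is made up
here and the operand constraints are copied from what I believe the mainline
__cmpxchg_asm uses; whether dropping the barrier is actually safe depends on
the uarch details still under embargo):

/* Same LL/SC loop as __cmpxchg_asm, minus the trailing __WEAK_LLSC_MB. */
#define __cmpxchg_asm_relaxed(ld, st, m, old, new)			\
({									\
	__typeof(old) __ret;						\
									\
	__asm__ __volatile__(						\
	"1:	" ld "	%0, %2		# relaxed cmpxchg \n"		\
	"	bne	%0, %z3, 2f			\n"		\
	"	or	$t0, %z4, $zero			\n"		\
	"	" st "	$t0, %1				\n"		\
	"	beq	$zero, $t0, 1b			\n"		\
	"2:						\n"		\
	: "=&r" (__ret), "=ZB" (*m)					\
	: "ZB" (*m), "Jr" (old), "Jr" (new)				\
	: "t0", "memory");						\
									\
	__ret;								\
})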



-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/


