Re: [PATCH] LoongArch: Add qspinlock support

Hi,

On Mon, Jun 20, 2022 at 12:11 AM Arnd Bergmann <arnd@xxxxxxxx> wrote:
>
> On Sun, Jun 19, 2022 at 5:48 PM Guo Ren <guoren@xxxxxxxxxx> wrote:
> >
> > On Sat, Jun 18, 2022 at 1:40 PM Arnd Bergmann <arnd@xxxxxxxx> wrote:
> > >
> > > On Sat, Jun 18, 2022 at 1:19 AM Guo Ren <guoren@xxxxxxxxxx> wrote:
> > > >
> > > > > static inline u32 arch_xchg32(u32 *ptr, u32 x) {...}
> > > > > static inline u64 arch_xchg64(u64 *ptr, u64 x) {...}
> > > > >
> > > > > #ifdef CONFIG_64BIT
> > > > > #define xchg(ptr, x) ((sizeof(*(ptr)) == 8) ? \
> > > > >             arch_xchg64((u64*)ptr, (uintptr_t)x) : \
> > > > >             arch_xchg32((u32*)ptr, (uintptr_t)x))
> > > > > #else
> > > > > #define xchg(ptr, x) arch_xchg32((u32*)ptr, (uintptr_t)x)
> > > > > #endif
> > > >
> > > > The above primitive implies only long & int type args are permitted, right?
> > >
> > > The idea is to allow any scalar or pointer type, but not structures or
> > > unions. If we need to deal with those as well, the macro could be extended
> > > accordingly, but I would prefer to limit it as much as possible.
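
A minimal sketch of that idea, with the result cast back to the caller's
type so that both scalars and pointers work (the names below are just
placeholders, and it glosses over 1- and 2-byte sizes):

#ifdef CONFIG_64BIT
#define my_xchg(ptr, x)						\
	((__typeof__(*(ptr)))(sizeof(*(ptr)) == 8 ?		\
		arch_xchg64((u64 *)(ptr), (u64)(uintptr_t)(x)) :	\
		arch_xchg32((u32 *)(ptr), (u32)(uintptr_t)(x))))
#else
#define my_xchg(ptr, x)						\
	((__typeof__(*(ptr)))arch_xchg32((u32 *)(ptr), (u32)(uintptr_t)(x)))
#endif
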
> > >
> > > There is already cmpxchg64(), which is used for types that are fixed to
> > > 64 bit integers even on 32-bit architectures, but it is rarely used except
> > > to implement the atomic64_t helpers.
> > A lot of 32-bit arches can't provide cmpxchg64() (they have nothing
> > like arm's ldrexd/strexd).
>
> Most 32-bit architectures also lack SMP support, so they can fall back to
> the generic version from include/asm-generic/cmpxchg-local.h
>
> > Another question: Do you know why arm32 didn't implement
> > HAVE_CMPXCHG_DOUBLE with ldrexd/strexd?
>
> I think it's just fairly obscure; the slub code appears to be the only
> code that would use it.
>
> > >
> > > 80% of the uses of cmpxchg() and xchg() deal with word-sized
> > > quantities like 'unsigned long' or 'void *', but the others are almost
> > > all fixed 32-bit quantities. We could change those to use cmpxchg32()
> > > directly and simplify the cmpxchg() function further to only deal
> > > with word-sized arguments, but I would not do that in the first step.
> > Don't forget cmpxchg_double() for this cleanup. When do you want to
> > restart the work?
>
> I have no specific plans at the moment. If you or someone else likes
> to look into it, I can dig out my old patch though.
>
> The cmpxchg_double() call seems to already fit in, since it is an
> inline function and does not expect arbitrary argument types.
Thank you all. :)

As Rui and Xuerui said, ll and sc in LoongArch both have implicit full
barriers, so there is no "relaxed" version.

The __WEAK_LLSC_MB in __cmpxchg_small() has nothing to do with ll and
sc themselves; we need a barrier at the branch target only because
Loongson-3A5000 has a hardware flaw (which will be fixed in
Loongson-3A6000).
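
For reference, such an LL/SC cmpxchg loop roughly has the following
shape; this is only a simplified sketch to show where the barrier sits,
not the exact code from the patch:

static inline u32 cmpxchg32_sketch(volatile u32 *ptr, u32 old, u32 new)
{
	u32 ret;

	__asm__ __volatile__(
	"1:	ll.w	%0, %1		\n"   /* load-linked current value     */
	"	bne	%0, %2, 2f	\n"   /* value != expected: branch out */
	"	move	$t0, %3		\n"
	"	sc.w	$t0, %1		\n"   /* store-conditional new value   */
	"	beqz	$t0, 1b		\n"   /* SC failed: retry              */
	"2:				\n"
	__WEAK_LLSC_MB			      /* barrier at the branch target,
					         needed only for the 3A5000    */
	: "=&r" (ret), "+ZB" (*ptr)
	: "r" (old), "r" (new)
	: "t0", "memory");

	return ret;
}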

qspinlock just needs xchg_small(), but cmpxchg_small() is also useful
for percpu operations. So I plan to split this patch into two: the
first adds xchg_small() and cmpxchg_small(), the second enables
qspinlock.
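
To illustrate what xchg_small() has to do for qspinlock's 16-bit tail,
here is a C-level sketch of the sub-word emulation (little-endian
assumed, names are placeholders; the real helper does the masking
inside the LL/SC loop itself rather than on top of cmpxchg()):

static inline u16 xchg16_sketch(volatile u16 *ptr, u16 val)
{
	/* The aligned 32-bit word containing the halfword, and its position. */
	volatile u32 *ptr32 = (volatile u32 *)((unsigned long)ptr & ~0x3UL);
	u32 shift = ((unsigned long)ptr & 0x2) * BITS_PER_BYTE;
	u32 mask = 0xffffU << shift;
	u32 old32, new32, ret;

	old32 = READ_ONCE(*ptr32);
	for (;;) {
		new32 = (old32 & ~mask) | ((u32)val << shift);
		ret = cmpxchg(ptr32, old32, new32);
		if (ret == old32)
			break;
		old32 = ret;	/* lost the race, retry with the updated word */
	}

	return (old32 & mask) >> shift;	/* previous halfword value */
}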

Huacai

>
>        Arnd


