Re: [PATCH v2 10/10] riscv: Add qspinlock support based on Zabha extension

Hi Guo,

On Mon, Jul 8, 2024 at 1:51 PM Guo Ren <guoren@xxxxxxxxxx> wrote:
>
> On Sun, Jul 7, 2024 at 10:20 AM Guo Ren <guoren@xxxxxxxxxx> wrote:
> >
> > On Wed, Jun 26, 2024 at 9:14 PM Alexandre Ghiti <alexghiti@xxxxxxxxxxxx> wrote:
> > >
> > > In order to produce a generic kernel, a user can select
> > > CONFIG_RISCV_COMBO_SPINLOCKS, which will fall back at runtime to the
> > > ticket spinlock implementation if Zabha is not present.
> > >
> > > Note that we can't use alternatives here because the discovery of
> > > extensions is done too late and we need to start with the qspinlock
> > That's not true; we treat every spinlock as a qspinlock at first.
> > A qspinlock unlock leaves the lock value at zero (clean), whereas a
> > ticket unlock leaves a dirty value. (I've spent a lot of time on this
> > mechanism, and you've preserved it in this patch.) So the
> > qspinlock -> ticket_lock switch point stays safe until sched_init(),
> > which is late enough to apply alternatives. The real problem with an
> > alternatives-based implementation is the tough coding: you can't reuse
> > the C code, so the whole ticket_lock would have to be rewritten in asm
> > and include the qspinlock fast path.
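> >
> > To make the clean/dirty point concrete, here is a rough sketch (not
> > kernel code, assuming the generic layouts: qspinlock treats 0 as
> > unlocked, the ticket lock keeps "next" in the upper 16 bits and
> > "owner" in the lower 16 bits):
> >
> > 	#include <stdint.h>
> >
> > 	static void lock_word_demo(void)
> > 	{
> > 		uint32_t lock = 0;	/* unlocked in both layouts */
> >
> > 		/* qspinlock fast path: uncontended lock/unlock round-trips to 0 */
> > 		lock = 1;		/* queued_spin_lock(): set _Q_LOCKED_VAL */
> > 		lock = 0;		/* queued_spin_unlock(): word is clean again */
> >
> > 		/* ticket lock: one lock/unlock cycle advances next and owner */
> > 		lock += 1u << 16;	/* ticket_spin_lock(): take ticket 0 */
> > 		lock = (lock & 0xffff0000u) | ((lock + 1) & 0xffffu);	/* unlock */
> >
> > 		/*
> > 		 * lock == 0x00010001: the lock is free but the word is nonzero,
> > 		 * so a qspinlock would now misread it as held ("dirty").
> > 		 */
> > 	}
> >
> > That is why the one-way switch qspinlock -> ticket_lock is fine, but
> > the reverse is not.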
> >
> > I think we should discuss a few points before continuing with the patch:
> > 1. Whether to use the alternatives mechanism for the combo spinlock.
> > 2. Keeping the three compile-time Kconfig options:
> >    ticket_spinlock/queued_spinlock/combo_spinlock.
> > 3. The forward progress guarantee requirement written in the
> >    qspinlock.h comment; that is not something CAS/BHA (Zacas/Zabha)
> >    provide by themselves.
> >
> > > implementation because the ticket spinlock would pollute
> > > the spinlock value, so let's use static keys.
> > >
> > > This is largely based on Guo's work and Leonardo's reviews at [1].
> > >
> > > Link: https://lore.kernel.org/linux-riscv/20231225125847.2778638-1-guoren@xxxxxxxxxx/ [1]
> > > Signed-off-by: Alexandre Ghiti <alexghiti@xxxxxxxxxxxx>
> > > ---
> > >  .../locking/queued-spinlocks/arch-support.txt |  2 +-
> > >  arch/riscv/Kconfig                            | 10 +++++
> > >  arch/riscv/include/asm/Kbuild                 |  4 +-
> > >  arch/riscv/include/asm/spinlock.h             | 39 +++++++++++++++++++
> > >  arch/riscv/kernel/setup.c                     | 21 ++++++++++
> > >  include/asm-generic/qspinlock.h               |  2 +
> > >  include/asm-generic/ticket_spinlock.h         |  2 +
> > >  7 files changed, 78 insertions(+), 2 deletions(-)
> > >  create mode 100644 arch/riscv/include/asm/spinlock.h
> > >
> > > diff --git a/Documentation/features/locking/queued-spinlocks/arch-support.txt b/Documentation/features/locking/queued-spinlocks/arch-support.txt
> > > index 22f2990392ff..cf26042480e2 100644
> > > --- a/Documentation/features/locking/queued-spinlocks/arch-support.txt
> > > +++ b/Documentation/features/locking/queued-spinlocks/arch-support.txt
> > > @@ -20,7 +20,7 @@
> > >      |    openrisc: |  ok  |
> > >      |      parisc: | TODO |
> > >      |     powerpc: |  ok  |
> > > -    |       riscv: | TODO |
> > > +    |       riscv: |  ok  |
> > >      |        s390: | TODO |
> > >      |          sh: | TODO |
> > >      |       sparc: |  ok  |
> > > diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> > > index 0bbaec0444d0..c2ba673e58ca 100644
> > > --- a/arch/riscv/Kconfig
> > > +++ b/arch/riscv/Kconfig
> > > @@ -72,6 +72,7 @@ config RISCV
> > >         select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP
> > >         select ARCH_WANTS_NO_INSTR
> > >         select ARCH_WANTS_THP_SWAP if HAVE_ARCH_TRANSPARENT_HUGEPAGE
> > > +       select ARCH_WEAK_RELEASE_ACQUIRE if ARCH_USE_QUEUED_SPINLOCKS
> > >         select BINFMT_FLAT_NO_DATA_START_OFFSET if !MMU
> > >         select BUILDTIME_TABLE_SORT if MMU
> > >         select CLINT_TIMER if RISCV_M_MODE
> > > @@ -482,6 +483,15 @@ config NODES_SHIFT
> > >           Specify the maximum number of NUMA Nodes available on the target
> > >           system.  Increases memory reserved to accommodate various tables.
> > >
> > > +config RISCV_COMBO_SPINLOCKS
> > > +       bool "Using combo spinlock"
> > > +       depends on SMP && MMU && TOOLCHAIN_HAS_ZABHA
> > > +       select ARCH_USE_QUEUED_SPINLOCKS
> > > +       default y
> > > +       help
> > > +         Embed both queued spinlock and ticket lock so that the spinlock
> > > +         implementation can be chosen at runtime.
> > > +
> >
> > The combo spinlock has a side effect: it expands the spinlock code
> > size quite a bit (see ARCH_INLINE_SPIN_LOCK).
> >
> > So we shouldn't drop the three-way Kconfig selection:
> >
> > +choice
> > +	prompt "RISC-V spinlock type"
> > +	default RISCV_COMBO_SPINLOCKS
> > +
> > +config RISCV_TICKET_SPINLOCKS
> > +	bool "Using ticket spinlock"
> > +
> > +config RISCV_QUEUED_SPINLOCKS
> > +	bool "Using queued spinlock"
> > +	depends on SMP && MMU
> > +	select ARCH_USE_QUEUED_SPINLOCKS
> > +	help
> > +	  Make sure your micro-arch gives cmpxchg/xchg forward progress
> > +	  guarantees. Otherwise, stay with the ticket lock.
> > +
> > +config RISCV_COMBO_SPINLOCKS
> > +	bool "Using combo spinlock"
> > +	depends on SMP && MMU
> > +	select ARCH_USE_QUEUED_SPINLOCKS
> > +	help
> > +	  Select queued spinlock or ticket lock via the kernel command line.
> > +endchoice
> > +
> >
> > >  config RISCV_ALTERNATIVE
> > >         bool
> > >         depends on !XIP_KERNEL
> > > diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
> > > index 504f8b7e72d4..ad72f2bd4cc9 100644
> > > --- a/arch/riscv/include/asm/Kbuild
> > > +++ b/arch/riscv/include/asm/Kbuild
> > > @@ -2,10 +2,12 @@
> > >  generic-y += early_ioremap.h
> > >  generic-y += flat.h
> > >  generic-y += kvm_para.h
> > > +generic-y += mcs_spinlock.h
> > >  generic-y += parport.h
> > > -generic-y += spinlock.h
> > >  generic-y += spinlock_types.h
> > > +generic-y += ticket_spinlock.h
> > >  generic-y += qrwlock.h
> > >  generic-y += qrwlock_types.h
> > > +generic-y += qspinlock.h
> > >  generic-y += user.h
> > >  generic-y += vmlinux.lds.h
> > > diff --git a/arch/riscv/include/asm/spinlock.h b/arch/riscv/include/asm/spinlock.h
> > > new file mode 100644
> > > index 000000000000..4856d50006f2
> > > --- /dev/null
> > > +++ b/arch/riscv/include/asm/spinlock.h
> > > @@ -0,0 +1,39 @@
> > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > +
> > > +#ifndef __ASM_RISCV_SPINLOCK_H
> > > +#define __ASM_RISCV_SPINLOCK_H
> > > +
> > > +#ifdef CONFIG_RISCV_COMBO_SPINLOCKS
> > > +#define _Q_PENDING_LOOPS       (1 << 9)
> > > +
> > > +#define __no_arch_spinlock_redefine
> > > +#include <asm/ticket_spinlock.h>
> > > +#include <asm/qspinlock.h>
> > > +#include <asm/alternative.h>
> > > +
> > > +DECLARE_STATIC_KEY_TRUE(qspinlock_key);
> > > +
> > > +#define SPINLOCK_BASE_DECLARE(op, type, type_lock)                     \
> > > +static __always_inline type arch_spin_##op(type_lock lock)             \
> > > +{                                                                      \
> > > +       if (static_branch_unlikely(&qspinlock_key))                     \
> > > +               return queued_spin_##op(lock);                          \
> > > +       return ticket_spin_##op(lock);                                  \
> > > +}
> > > +
> > > +SPINLOCK_BASE_DECLARE(lock, void, arch_spinlock_t *)
> > > +SPINLOCK_BASE_DECLARE(unlock, void, arch_spinlock_t *)
> > > +SPINLOCK_BASE_DECLARE(is_locked, int, arch_spinlock_t *)
> > > +SPINLOCK_BASE_DECLARE(is_contended, int, arch_spinlock_t *)
> > > +SPINLOCK_BASE_DECLARE(trylock, bool, arch_spinlock_t *)
> > > +SPINLOCK_BASE_DECLARE(value_unlocked, int, arch_spinlock_t)
> > > +
> > > +#else
> > > +
> > > +#include <asm/ticket_spinlock.h>
> > > +
> > > +#endif
> > > +
> > > +#include <asm/qrwlock.h>
> > > +
> > > +#endif /* __ASM_RISCV_SPINLOCK_H */
> > > diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
> > > index 4f73c0ae44b2..929bccd4c9e5 100644
> > > --- a/arch/riscv/kernel/setup.c
> > > +++ b/arch/riscv/kernel/setup.c
> > > @@ -244,6 +244,26 @@ static void __init parse_dtb(void)
> > >  #endif
> > >  }
> > >
> > > +DEFINE_STATIC_KEY_TRUE(qspinlock_key);
> > > +EXPORT_SYMBOL(qspinlock_key);
> > > +
> > > +static void __init riscv_spinlock_init(void)
> > > +{
> > > +       asm goto(ALTERNATIVE("j %[no_zacas]", "nop", 0, RISCV_ISA_EXT_ZACAS, 1)
> > > +                : : : : no_zacas);
> > > +       asm goto(ALTERNATIVE("nop", "j %[qspinlock]", 0, RISCV_ISA_EXT_ZABHA, 1)
> > > +                : : : : qspinlock);
> > The qspinlock requirement is about the forward progress guarantee of
> > the micro-architecture, which covers the cmpxchg loop and xchg_tail. So
> > I don't think these extensions by themselves tell us whether qspinlock
> > is suitable.
> >
> > If your machine doesn't keep a young exclusive cacheline sticky for
> > long enough, fall back to ticket_lock.
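> >
> > For reference, a simplified sketch of the LR/SC retry loop behind such
> > a cmpxchg (the real code lives in arch/riscv/include/asm/cmpxchg.h;
> > ordering suffixes and fences are omitted here, and the function name is
> > made up for illustration):
> >
> > 	#include <stdint.h>
> >
> > 	static inline uint32_t cmpxchg32_sketch(uint32_t *p, uint32_t old,
> > 						uint32_t new)
> > 	{
> > 		uint32_t prev, rc;
> >
> > 		__asm__ __volatile__ (
> > 		"0:	lr.w	%0, %2\n"	/* load-reserve the current value */
> > 		"	bne	%0, %3, 1f\n"	/* not the expected value: bail out */
> > 		"	sc.w	%1, %4, %2\n"	/* try to store the new value */
> > 		"	bnez	%1, 0b\n"	/* reservation lost: retry */
> > 		"1:\n"
> > 			: "=&r" (prev), "=&r" (rc), "+A" (*p)
> > 			: "r" (old), "r" (new)
> > 			: "memory");
> >
> > 		return prev;
> > 	}
> >
> > Without a forward progress guarantee on such sequences, another hart
> > can steal the cache line between lr.w and sc.w on every iteration and
> > the loop may never terminate; that is exactly what the "stickiness" of
> > a young exclusive cacheline is about.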
>
> Could we use "Ziccrse: Main memory supports forward progress on LR/SC
> sequences" for qspinlock selection?
>
> +       asm goto(ALTERNATIVE("nop", "j %[qspinlock]", 0, RISCV_ISA_EXT_ZICCRSE, 1)
> +                : : : : qspinlock);
>
> [1] https://github.com/riscv/riscv-profiles/blob/main/src/rva23-profile.adoc

Yes, I'll do that, thanks for the pointer!

Thanks,

Alex

>
> >
> > > +
> > > +no_zacas:
> > > +       static_branch_disable(&qspinlock_key);
> > > +       pr_info("Ticket spinlock: enabled\n");
> > > +
> > > +       return;
> > > +
> > > +qspinlock:
> > > +       pr_info("Queued spinlock: enabled\n");
> > > +}
> > > +
> > >  extern void __init init_rt_signal_env(void);
> > >
> > >  void __init setup_arch(char **cmdline_p)
> > > @@ -295,6 +315,7 @@ void __init setup_arch(char **cmdline_p)
> > >         riscv_set_dma_cache_alignment();
> > >
> > >         riscv_user_isa_enable();
> > > +       riscv_spinlock_init();
> > >  }
> > >
> > >  bool arch_cpu_is_hotpluggable(int cpu)
> > > diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
> > > index 0655aa5b57b2..bf47cca2c375 100644
> > > --- a/include/asm-generic/qspinlock.h
> > > +++ b/include/asm-generic/qspinlock.h
> > > @@ -136,6 +136,7 @@ static __always_inline bool virt_spin_lock(struct qspinlock *lock)
> > >  }
> > >  #endif
> > >
> > > +#ifndef __no_arch_spinlock_redefine
> > >  /*
> > >   * Remapping spinlock architecture specific functions to the corresponding
> > >   * queued spinlock functions.
> > > @@ -146,5 +147,6 @@ static __always_inline bool virt_spin_lock(struct qspinlock *lock)
> > >  #define arch_spin_lock(l)              queued_spin_lock(l)
> > >  #define arch_spin_trylock(l)           queued_spin_trylock(l)
> > >  #define arch_spin_unlock(l)            queued_spin_unlock(l)
> > > +#endif
> > >
> > >  #endif /* __ASM_GENERIC_QSPINLOCK_H */
> > > diff --git a/include/asm-generic/ticket_spinlock.h b/include/asm-generic/ticket_spinlock.h
> > > index cfcff22b37b3..325779970d8a 100644
> > > --- a/include/asm-generic/ticket_spinlock.h
> > > +++ b/include/asm-generic/ticket_spinlock.h
> > > @@ -89,6 +89,7 @@ static __always_inline int ticket_spin_is_contended(arch_spinlock_t *lock)
> > >         return (s16)((val >> 16) - (val & 0xffff)) > 1;
> > >  }
> > >
> > > +#ifndef __no_arch_spinlock_redefine
> > >  /*
> > >   * Remapping spinlock architecture specific functions to the corresponding
> > >   * ticket spinlock functions.
> > > @@ -99,5 +100,6 @@ static __always_inline int ticket_spin_is_contended(arch_spinlock_t *lock)
> > >  #define arch_spin_lock(l)              ticket_spin_lock(l)
> > >  #define arch_spin_trylock(l)           ticket_spin_trylock(l)
> > >  #define arch_spin_unlock(l)            ticket_spin_unlock(l)
> > > +#endif
> > >
> > >  #endif /* __ASM_GENERIC_TICKET_SPINLOCK_H */
> > > --
> > > 2.39.2
> > >
> >
> >
> > --
> > Best Regards
> >  Guo Ren
>
>
>
> --
> Best Regards
>  Guo Ren




