Re: [PATCH v15 3/6] locking/qspinlock: Introduce CNA into the slow path of qspinlock

On Fri, Aug 4, 2023 at 9:33 AM Guo Ren <guoren@xxxxxxxxxx> wrote:
>
> On Thu, Aug 3, 2023 at 7:57 PM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> >
> > On Thu, Aug 03, 2023 at 06:28:51PM +0800, Guo Ren wrote:
> > > On Thu, Aug 3, 2023 at 4:50 PM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> > > >
> > > > On Wed, Aug 02, 2023 at 07:14:05PM -0400, Guo Ren wrote:
> > > >
> > > > > The pv_ops belongs to the x86-specific framework, and it prevents
> > > > > other architectures from connecting to the CNA spinlock.
> > > >
> > > > static_call() exists as an arch-neutral variant of this.
> > > Emm... we have used static_call() in the riscv queued_spin_lock_:
> > > https://lore.kernel.org/all/20230802164701.192791-20-guoren@xxxxxxxxxx/
> >
> > Yeah, I think I saw that land in the INBOX, just haven't had time to
> > look at it.
> >
> > > But we ran into a compile problem:
> > >
> > >   GEN     .vmlinux.objs
> > >   MODPOST Module.symvers
> > > ERROR: modpost: "__SCK__pv_queued_spin_unlock" [arch/riscv/kvm/kvm.ko]
> > > undefined!
> > > ERROR: modpost: "__SCK__pv_queued_spin_unlock"
> > > [kernel/locking/locktorture.ko] undefined!
> > > ERROR: modpost: "__SCK__pv_queued_spin_unlock" [mm/z3fold.ko] undefined!
> > > ERROR: modpost: "__SCK__pv_queued_spin_unlock"
> > > [fs/nfs_common/grace.ko] undefined!
> > > ERROR: modpost: "__SCK__pv_queued_spin_unlock" [fs/quota/quota_v1.ko] undefined!
> > > ERROR: modpost: "__SCK__pv_queued_spin_unlock" [fs/quota/quota_v2.ko] undefined!
> > > ERROR: modpost: "__SCK__pv_queued_spin_unlock"
> > > [fs/quota/quota_tree.ko] undefined!
> > > ERROR: modpost: "__SCK__pv_queued_spin_unlock" [fs/fuse/virtiofs.ko] undefined!
> > > ERROR: modpost: "__SCK__pv_queued_spin_unlock" [fs/dlm/dlm.ko] undefined!
> > > ERROR: modpost: "__SCK__pv_queued_spin_unlock" [fs/fscache/fscache.ko]
> > > undefined!
> > > WARNING: modpost: suppressed 839 unresolved symbol warnings because
> > > there were too many)
> > > /home/guoren/source/kernel/linux/scripts/Makefile.modpost:144: recipe
> > > for target 'Module.symvers' failed
> > >
> > > Our solution is:
> > > EXPORT_SYMBOL(__SCK__pv_queued_spin_unlock);
> > >
> > > What do you think about it?
> >
> > Could be you're not using static_call_mod() to go with
> > EXPORT_STATIC_CALL_TRAMP()
> Thx, that's what I want.
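
For reference, here is a minimal sketch of what Peter suggests (the names
follow the x86 paravirt spinlock code; treat this as an assumption about how
the riscv side would wire it up, not the final patch). EXPORT_STATIC_CALL_TRAMP()
exports only the trampoline symbol (__SCT__*), not the key (__SCK__*), so
modules can call through the trampoline but cannot retarget it. Module-side
callers must then use static_call_mod(), which references only the trampoline.
That is why the __SCK__pv_queued_spin_unlock modpost errors go away without
an EXPORT_SYMBOL() on the key:

	/* built-in code, illustrative sketch */
	#include <linux/static_call.h>

	void native_queued_spin_unlock(struct qspinlock *lock);

	DEFINE_STATIC_CALL(pv_queued_spin_unlock, native_queued_spin_unlock);
	/* Export only the trampoline; the __SCK__ key stays private. */
	EXPORT_STATIC_CALL_TRAMP(pv_queued_spin_unlock);

	/* callers that may live in modules */
	static __always_inline void queued_spin_unlock(struct qspinlock *lock)
	{
		static_call_mod(pv_queued_spin_unlock)(lock);
	}

The built-in side can still switch the target at boot time with
static_call_update(); modules merely jump through the exported trampoline.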
>
> >
> > > > > I'm working on riscv qspinlock on the sg2042, a 64-core platform
> > > > > with 2/4 NUMA nodes. Here are the patches for the riscv CNA qspinlock:
> > > > > https://lore.kernel.org/linux-riscv/20230802164701.192791-19-guoren@xxxxxxxxxx/
> > > > >
> > > > > What's the next plan for this patch series? I think the two-queue
> > > > > design satisfies most platforms with two NUMA nodes.
> > > >
> > > > What has been your reason for working on CNA? What lock has been so
> > > > contended you need this?
> > > I wrote the reason here:
> > > https://lore.kernel.org/all/20230802164701.192791-1-guoren@xxxxxxxxxx/
> > >
> > > The target platform is: https://www.sophon.ai/
> > >
> > > The two-NUMA-node platform has come out, so we want to measure the
> > > benefit of CNA qspinlock.
> >
> > CNA should only show a benefit when there is strong inter-node
> > contention, and in that case it is typically best to fix the kernel side
> > locking.
> >
> > Hence the question as to what lock prompted you to look at this.
> I ran into the long-lock-queue situation when the hardware used an overly
> aggressive store-queue merge-buffer delay mechanism. See:
> https://lore.kernel.org/linux-riscv/20230802164701.192791-8-guoren@xxxxxxxxxx/
>
> This also led me to consider improving the efficiency of releasing a long
> lock queue. For example, if the queue looks like this:
>
> (Node0 cpu0) -> (Node1 cpu64) -> (Node0 cpu1) -> (Node1 cpu65) ->
> (Node0 cpu2) -> (Node1 cpu66) -> ...
>
> Then every mcs_unlock would cause a cross-NUMA transaction. But if we
> could make the queue like this:
>
> (Node0 cpu0) -> (Node0 cpu1) -> (Node0 cpu2) -> (Node1 cpu65) ->
> (Node1 cpu66) -> (Node1 cpu64) -> ...
>
> Only one cross-NUMA transaction is needed. Although the reordering could
> cause starvation, qspinlock.numa_spinlock_threshold_ns gives a basic
> fairness guarantee.
I thought of it as a tradeoff balancing fairness and efficiency.
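
To make the reordering concrete, here is a simplified sketch of the idea
(illustrative only; the real logic lives in the CNA patches' qspinlock_cna.h,
and the struct/function names here are made up for the example). At handover
time the lock holder scans the main MCS queue for the next waiter on its own
NUMA node and moves the skipped remote waiters to a secondary queue, so the
lock crosses nodes once per batch instead of once per handover:

	struct cna_node {
		struct cna_node	*next;		/* main MCS queue link */
		int		numa_node;	/* node this waiter runs on */
	};

	/*
	 * Find the first waiter on @me's node; detach the remote prefix
	 * in front of it onto @secondary so it gets the lock later.
	 * Returns the same-node successor, or NULL if there is none.
	 */
	static struct cna_node *cna_order_queue(struct cna_node *me,
						struct cna_node **secondary)
	{
		struct cna_node *head = me->next;
		struct cna_node *prev = NULL, *cur = head;

		/* Walk past waiters queued on other nodes. */
		while (cur && cur->numa_node != me->numa_node) {
			prev = cur;
			cur = cur->next;
		}

		if (!cur || !prev)
			return cur;	/* no reorder needed or possible */

		/*
		 * Splice [head..prev] onto the secondary queue.  (The real
		 * CNA code appends at the tail to keep FIFO order within
		 * the secondary queue; prepending keeps this sketch short.)
		 */
		prev->next = *secondary;
		*secondary = head;

		me->next = cur;		/* hand over to a same-node waiter */
		return cur;
	}

With the example queue above, cpu0 would skip cpu64, hand the lock to cpu1,
then cpu2, and only then to the node-1 waiters parked on the secondary queue;
numa_spinlock_threshold_ns bounds how long the secondary queue can be deferred.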

>
> --
> Best Regards
>  Guo Ren



-- 
Best Regards
 Guo Ren



