Re: [PATCH v4] x86/paravirt: Disable virt spinlock on bare metal

Hi Bibo,

On 2024-08-01 at 16:00:19 +0800, maobibo wrote:
> Chenyu,
> 
> I do not know much about x86, so I will just give some comments (probably
> incorrect) based on the code.
> 
> On 2024/7/29 at 2:52 PM, Chen Yu wrote:
> > X86_FEATURE_HYPERVISOR         Y    Y    Y     N
> > CONFIG_PARAVIRT_SPINLOCKS      Y    Y    N     Y/N
> > PV spinlock                    Y    N    N     Y/N
> > 
> > virt_spin_lock_key             N    N    Y     N
> > 
> > -DECLARE_STATIC_KEY_TRUE(virt_spin_lock_key);
> > +DECLARE_STATIC_KEY_FALSE(virt_spin_lock_key);
> 
> @@ -87,7 +87,7 @@ static inline bool virt_spin_lock(struct qspinlock *lock)
>  {
>         int val;
> 
> -       if (!static_branch_likely(&virt_spin_lock_key))
> +       if (!static_branch_unlikely(&virt_spin_lock_key))
>                 return false;
> 
> Do we need to change it to static_branch_unlikely() if the value of the key
> is false by default?

My understanding is that, firstly, whether to use likely() or unlikely()
depends on the 'expected' value of the key rather than its default
initialized value. The compiler can arrange the 'jump' condition so that
the expected outcome falls through, avoiding the overhead of a branch
jump (and keeping the instruction pipeline full) as much as possible.
Secondly, before this patch, the 'expected' value of virt_spin_lock_key
was 'true', so I'm not sure we should change its behavior. Although in
most VM guests with para-virt spinlocks enabled this key should be false
most of the time, just in case of any regression...

> >   /*
> >    * Shortcut for the queued_spin_lock_slowpath() function that allows
> > diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
> > index 5358d43886ad..fec381533555 100644
> > --- a/arch/x86/kernel/paravirt.c
> > +++ b/arch/x86/kernel/paravirt.c
> > @@ -51,13 +51,12 @@ DEFINE_ASM_FUNC(pv_native_irq_enable, "sti", .noinstr.text);
> >   DEFINE_ASM_FUNC(pv_native_read_cr2, "mov %cr2, %rax", .noinstr.text);
> >   #endif
> > -DEFINE_STATIC_KEY_TRUE(virt_spin_lock_key);
> > +DEFINE_STATIC_KEY_FALSE(virt_spin_lock_key);
> >   void __init native_pv_lock_init(void)
> >   {
> > -	if (IS_ENABLED(CONFIG_PARAVIRT_SPINLOCKS) &&
> > -	    !boot_cpu_has(X86_FEATURE_HYPERVISOR))
> > -		static_branch_disable(&virt_spin_lock_key);
> > +	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
> > +		static_branch_enable(&virt_spin_lock_key);
> >   }
> 
> From my point, the sentence static_branch_disable(&virt_spin_lock_key) can
> be removed in file arch/x86/xen/spinlock.c and arch/x86/kernel/kvm.c, since
> function native_smp_prepare_boot_cpu() is already called by
> xen_smp_prepare_boot_cpu() and kvm_smp_prepare_boot_cpu().
>

The key is enabled by native_smp_prepare_boot_cpu() for a VM guest as its
initial state (effectively true). Later, either arch/x86/xen/spinlock.c or
arch/x86/kernel/kvm.c disables this key in an on-demand manner.

thanks,
Chenyu



