Hi Nikolay,

On 2024-06-18 at 11:24:42 +0300, Nikolay Borisov wrote:
>
>
> On 26.05.24 at 4:58, Chen Yu wrote:
> > The kernel can change spinlock behavior when running as a guest. But
> > this guest-friendly behavior causes performance problems on bare metal.
> > So there's a 'virt_spin_lock_key' static key to switch between the two
> > modes.
> >
> > The static key is always enabled by default (run in guest mode) and
> > should be disabled for bare metal (and in some guests that want native
> > behavior).
> >
> > A performance drop is reported when running encode/decode workloads and
> > the BenchSEE cache sub-workload.
> > Bisect points to commit ce0a1b608bfc ("x86/paravirt: Silence unused
> > native_pv_lock_init() function warning"). When CONFIG_PARAVIRT_SPINLOCKS
> > is disabled, virt_spin_lock_key is incorrectly set to true on bare
> > metal. The qspinlock degenerates to a test-and-set spinlock, which
> > decreases performance on bare metal.
> >
> > Fix this by disabling virt_spin_lock_key on bare metal, regardless of
> > CONFIG_PARAVIRT_SPINLOCKS.
> >
>
> nit:
>
> This bug wouldn't have happened if the key was defined FALSE by default and
> only enabled in the appropriate case. I think it makes more sense to invert
> the logic and have the key FALSE by default and only enable it iff the
> kernel is running under a hypervisor... At worst only the virtualization
> case would suffer if the key is falsely not enabled.

Thank you for your review. I agree, initializing the key to FALSE by default
seems more readable. Could this change be made as a follow-up on top of the
current fix, to keep the series bisectable?

Set the key to false by default. If booting in a VM, enable the key. Later
during VM initialization, if a more efficient spinlock implementation is
preferred, such as the paravirt spinlock, virt_spin_lock_key will be
disabled accordingly (a rough sketch of that guest-side step follows the
patch below).

diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index cde8357bb226..a7d3ba00e70e 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -66,13 +66,13 @@ static inline bool vcpu_is_preempted(long cpu)
 
 #ifdef CONFIG_PARAVIRT
 /*
- * virt_spin_lock_key - enables (by default) the virt_spin_lock() hijack.
+ * virt_spin_lock_key - disables (by default) the virt_spin_lock() hijack.
  *
  * Native (and PV wanting native due to vCPU pinning) should disable this key.
  * It is done in this backwards fashion to only have a single direction change,
  * which removes ordering between native_pv_spin_init() and HV setup.
  */
-DECLARE_STATIC_KEY_TRUE(virt_spin_lock_key);
+DECLARE_STATIC_KEY_FALSE(virt_spin_lock_key);
 
 /*
  * Shortcut for the queued_spin_lock_slowpath() function that allows
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index c193c9e60a1b..fec381533555 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -51,12 +51,12 @@ DEFINE_ASM_FUNC(pv_native_irq_enable, "sti", .noinstr.text);
 DEFINE_ASM_FUNC(pv_native_read_cr2, "mov %cr2, %rax", .noinstr.text);
 #endif
 
-DEFINE_STATIC_KEY_TRUE(virt_spin_lock_key);
+DEFINE_STATIC_KEY_FALSE(virt_spin_lock_key);
 
 void __init native_pv_lock_init(void)
 {
-	if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))
-		static_branch_disable(&virt_spin_lock_key);
+	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
+		static_branch_enable(&virt_spin_lock_key);
 }
 
 static void native_tlb_remove_table(struct mmu_gather *tlb, void *table)
-- 
2.25.1
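
For completeness, here is a rough, illustrative sketch of the guest-side step
mentioned above. It is not part of this patch: the function name
guest_spinlock_setup() is made up for illustration, and the real hook would
live in the guest's init path (for example, KVM's spinlock setup), but the
static_branch_disable() call on virt_spin_lock_key is the relevant operation:

/*
 * Illustrative sketch only, not part of this patch. A hypothetical
 * guest_spinlock_setup() stands in for the real guest init hook.
 */
static void __init guest_spinlock_setup(void)
{
	/*
	 * native_pv_lock_init() has already enabled virt_spin_lock_key,
	 * since X86_FEATURE_HYPERVISOR is set in a guest. If this guest
	 * switches to paravirt spinlocks instead, turn the key back off
	 * so qspinlock keeps its queued (non test-and-set) behavior.
	 */
	static_branch_disable(&virt_spin_lock_key);

	/* ... install the paravirt spinlock callbacks (pv_ops.lock) here ... */
}

With the key defaulting to FALSE, the worst failure mode is the one you
describe: a guest that misses the enable path falls back to native queued
spinlocks, rather than bare metal falling back to test-and-set.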