Hey Radim,

On Thu, Nov 09, 2017 at 03:17:33PM +0100, Radim Krčmář wrote:

<cut>

> This is what I'm doubting, because the patch is adding about two
> thousand cycles to every spinlock-taken path.
> Doesn't this patch yield better results?
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 3df743b60c80..d9225e48c11a 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -676,6 +676,12 @@ void __init kvm_spinlock_init(void)
>  {
>  	if (!kvm_para_available())
>  		return;
> +
> +	if (kvm_para_has_feature(KVM_FEATURE_PV_DEDICATED)) {
> +		static_branch_disable(&virt_spin_lock_key);
> +		return;
> +	}
> +

Yes, the above is a much better approach. The code has probably changed
since I wrote the first version; I will refresh the patch with your
suggestion.

>  	/* Does host kernel support KVM_FEATURE_PV_UNHALT? */
>  	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
>  		return;
>
> > However, the key aspect here is that this patch gives the host a way
> > to instruct the guest to use qspinlock. Even with Longman's patch,
> > which allows the guest to select the spinlock implementation, there
> > should still be the auto-select mode. In that mode, PV_DEDICATED
> > should allow the host to get the guest to use qspinlock; without it,
> > the guest will fall back to tas when PV_UNHALT == 0.
>
> I agree that a flag can be useful for certain setups.

Cool!

--
All the best,
Eduardo Valentin
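P.S. For reference, here is roughly how I expect the refreshed
kvm_spinlock_init() to look with the auto-select behaviour folded in.
This is only an untested sketch on top of your diff; it assumes
KVM_FEATURE_PV_DEDICATED is defined as in this series, and it elides
the existing PV qspinlock setup at the end of the function:

/* Untested sketch -- assumes KVM_FEATURE_PV_DEDICATED from this series. */
void __init kvm_spinlock_init(void)
{
	if (!kvm_para_available())
		return;

	/*
	 * Host reports dedicated pCPUs: keep the native qspinlock,
	 * i.e. disable the test-and-set fallback and skip PV unhalt.
	 */
	if (kvm_para_has_feature(KVM_FEATURE_PV_DEDICATED)) {
		static_branch_disable(&virt_spin_lock_key);
		return;
	}

	/*
	 * Auto-select: without PV_UNHALT the guest stays on the
	 * virt_spin_lock (test-and-set) fallback.
	 */
	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
		return;

	/* ... existing PV qspinlock setup, unchanged ... */
}

The point is that the PV_DEDICATED check runs before the PV_UNHALT
check, so a host that pins vCPUs can steer the guest to plain qspinlock
regardless of whether PV_UNHALT is exposed.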