Re: Regression with nested HyperV VM

Vitaly Kuznetsov <vkuznets@xxxxxxxxxx> writes:

>
> In other words: work in progress, stay tuned.

It seems I was able to identify the culprit:

commit 5bea5123cbf08f990a1aee8f08c643a272e06a0f
Author: Paolo Bonzini <pbonzini@xxxxxxxxxx>
Date:   Tue Sep 18 15:19:17 2018 +0200

    KVM: VMX: check nested state and CR4.VMXE against SMM

This also matches Jon's observation that the regression first appeared in
4.19.

Unless I'm mistaken, the issue is: while forbidding the guest from setting
VMXE in SMM, we forgot about RSM, which is supposed to restore VMXE if it
was set before entering SMM. My first RFC would therefore be to move the
check from vmx_set_cr4() to handle_set_cr4():

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 02cf8a551bd1..1f2724c5e99d 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2901,17 +2901,13 @@ int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
                                        SECONDARY_EXEC_DESC);
        }
 
-       if (cr4 & X86_CR4_VMXE) {
-               /*
-                * To use VMXON (and later other VMX instructions), a guest
-                * must first be able to turn on cr4.VMXE (see handle_vmon()).
-                * So basically the check on whether to allow nested VMX
-                * is here.  We operate under the default treatment of SMM,
-                * so VMX cannot be enabled under SMM.
-                */
-               if (!nested_vmx_allowed(vcpu) || is_smm(vcpu))
-                       return 1;
-       }
+       /*
+        * To use VMXON (and later other VMX instructions), a guest must first
+        * be able to turn on cr4.VMXE (see handle_vmon()).  So basically the
+        * check on whether to allow nested VMX is here.
+        */
+       if (cr4 & X86_CR4_VMXE && !nested_vmx_allowed(vcpu))
+               return 1;
 
        if (to_vmx(vcpu)->nested.vmxon && !nested_cr4_valid(vcpu, cr4))
                return 1;
@@ -4627,6 +4623,10 @@ static int handle_set_cr0(struct kvm_vcpu *vcpu, unsigned long val)
 
 static int handle_set_cr4(struct kvm_vcpu *vcpu, unsigned long val)
 {
+       /* VMXE can't be enabled from SMM */
+       if (val & X86_CR4_VMXE && is_smm(vcpu))
+               return 1;
+
        if (is_guest_mode(vcpu)) {
                struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
                unsigned long orig_val = val;
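
The intent of the patch above can be sketched with a tiny user-space model
(hypothetical names and a simplified vCPU struct, not the kernel's
kvm_vcpu): a guest MOV-to-CR4 exit rejects VMXE while in SMM, whereas the
RSM path restores the saved CR4 unconditionally, since the SMM check no
longer sits in the common vmx_set_cr4() path.

```c
#include <assert.h>
#include <stdbool.h>

#define X86_CR4_VMXE (1UL << 13)   /* CR4.VMXE is bit 13 */

/* Hypothetical, simplified vCPU model. */
struct vcpu {
	bool smm;              /* currently in System Management Mode */
	unsigned long cr4;
};

/*
 * Models handle_set_cr4(): a guest MOV-to-CR4 VM exit.  VMXE may not be
 * turned on from within SMM, so the check lives on this path only.
 */
static int guest_set_cr4(struct vcpu *v, unsigned long val)
{
	if ((val & X86_CR4_VMXE) && v->smm)
		return 1;      /* fault injected into the guest */
	v->cr4 = val;
	return 0;
}

/*
 * Models the RSM path: leaving SMM restores the saved CR4 as-is,
 * including VMXE, because the guest legitimately had it set before
 * the SMI arrived.  No SMM check here.
 */
static int rsm_restore_cr4(struct vcpu *v, unsigned long saved_cr4)
{
	v->smm = false;
	v->cr4 = saved_cr4;
	return 0;
}
```

With this split, a nested guest that had CR4.VMXE set before an SMI can
resume cleanly, while an explicit MOV to CR4 from SMM code still faults.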


Paolo, as the author of the original commit, what do you think? Would
this protection be enough or do you envision any other scenarios?

-- 
Vitaly
