Re: [PATCH 2/2] KVM: x86: Add kvm_x86_ops callback to allow VMX to stash away CR4.VMXE

On Wed, Mar 27, 2019 at 12:31 PM Sean Christopherson
<sean.j.christopherson@xxxxxxxxx> wrote:
>
> Intel CPUs manually save and load CR4.VMXE internally instead of having
> it automatically switched via the SMM State Save.  Specifically, the
> SDM uses the following pseudocode to describe SMI and RSM:
>
> SMI:
>   enter SMM;
>   save the following internal to the processor:
>     CR4.VMXE
>     an indication of whether the logical processor was in VMX operation (root or non-root)
>   IF the logical processor is in VMX operation THEN
>     save current VMCS pointer internal to the processor;
>     leave VMX operation;
>     save VMX-critical state defined below
>   FI;
>   CR4.VMXE <- 0;
>   perform ordinary SMI delivery:
>   save processor state in SMRAM;
>   set processor state to standard SMM values;
>
> RSM:
>   IF VMXE = 1 in CR4 image in SMRAM THEN
>     fail and enter shutdown state;
>   ELSE
>     restore state normally from SMRAM;
>     <unrelated stuff omitted for brevity>
>   FI;
>   CR4.VMXE <- value stored internally;
>   <more stuff omitted for brevity>
>   leave SMM;
>
> Presumably, the manual handling of CR4.VMXE is done to avoid having
> to special-case CR4.VMXE in the legacy state save/load flows.  As luck
> would have it, KVM has a similar conundrum in its SMM state save/load
> flows in that vmx_set_cr4() forbids CR4.VMXE from being set while in
> SMM.  The check in vmx_set_cr4() can cause RSM to fail when loading
> CR4 from the SMM save state area.
>
> Take a page out of the SDM and manually handle CR4.VMXE during SMI and
> RSM.  Alternatively, HF_SMM_MASK could be temporarily disabled when
> setting CR4 as part of RSM, but doing so is rather ugly due to the need
> to plumb the "from_rsm" information through to emulator_set_cr(), and
> also opens up a virtualization hole, e.g. KVM would not properly handle
> the (extremely unlikely) scenario where the guest toggles CR4.VMXE in
> the SMM save state area from 0->1.
>
> Reported-by: Jon Doron <arilou@xxxxxxxxx>
> Suggested-by: Jim Mattson <jmattson@xxxxxxxxxx>
> Cc: Liran Alon <liran.alon@xxxxxxxxxx>
> Cc: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
> Fixes: 5bea5123cbf0 ("KVM: VMX: check nested state and CR4.VMXE against SMM")
> Signed-off-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>

Thanks! This looks like the right way to do it, but I'm probably biased.

> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index a1e00d0a2482..a3444625ca7f 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
> @@ -162,6 +162,8 @@ struct nested_vmx {
>
>         /* SMM related state */
>         struct {
> +               /* CR4.VMXE=1 on SMM entry? */
> +               bool cr4_vmxe;
>                 /* in VMX operation on SMM entry? */
>                 bool vmxon;
>                 /* in guest mode on SMM entry? */

Where does this state get saved/restored when suspending/resuming a
VCPU that's in SMM?


