Re: [PATCH 2/2] KVM: x86: Add kvm_x86_ops callback to allow VMX to stash away CR4.VMXE

On Thu, Mar 28, 2019 at 10:42:31AM +0100, Vitaly Kuznetsov wrote:
> Sean Christopherson <sean.j.christopherson@xxxxxxxxx> writes:

...

> > +static void vmx_pre_smi_save_state(struct kvm_vcpu *vcpu)
> > +{
> > +	struct vcpu_vmx *vmx = to_vmx(vcpu);
> > +
> > +	if (kvm_read_cr4(vcpu) & X86_CR4_VMXE) {
> > +		vmx->nested.smm.cr4_vmxe = true;
> > +		WARN_ON(vmx_set_cr4(vcpu, kvm_read_cr4(vcpu) & ~X86_CR4_VMXE));
> 
> This WARN_ON fires: vmx_set_cr4() has the following check:
> 
> if (to_vmx(vcpu)->nested.vmxon && !nested_cr4_valid(vcpu, cr4))
>         return 1;
> 
> X86_CR4_VMXE can't be unset while nested.vmxon is on....
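
Yep.  nested_cr4_valid() enforces the VMX fixed bits, and CR4.VMXE is a
fixed-1 bit while the guest is in VMX operation, so clearing it here is
guaranteed to be rejected.  A tiny standalone model of that style of check
(the mask values below are illustrative, not the real MSR-derived ones):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define X86_CR4_VMXE	(1ULL << 13)

/* A value is valid iff all fixed-1 bits are set and no bit outside fixed1 is set. */
static bool fixed_bits_valid(uint64_t val, uint64_t fixed0, uint64_t fixed1)
{
	return ((val & fixed0) == fixed0) && ((val & ~fixed1) == 0);
}

int main(void)
{
	uint64_t cr4_fixed0 = X86_CR4_VMXE;	/* VMXE reported as fixed-1 */
	uint64_t cr4_fixed1 = ~0ULL;
	uint64_t cr4 = X86_CR4_VMXE;

	printf("VMXE set:   %svalid\n",
	       fixed_bits_valid(cr4, cr4_fixed0, cr4_fixed1) ? "" : "in");
	printf("VMXE clear: %svalid\n",
	       fixed_bits_valid(cr4 & ~X86_CR4_VMXE, cr4_fixed0, cr4_fixed1) ? "" : "in");
	return 0;
}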

Blech.  Moving the handling of nested.vmxon here is somewhat doable,
but holy cow does it become a jumbled mess.  Rather than trying to
constantly juggle HF_SMM_MASK, I have a more ambitious idea:

Clear HF_SMM_MASK before loading state.
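
Very roughly, the shape I'm thinking of for em_rsm() -- sketched from
memory, so the hook and helper names may not match the series exactly:

	/*
	 * Drop the SMM hflags *before* reloading state, so that everything
	 * downstream of the load (e.g. vmx_set_cr4() restoring CR4.VMXE)
	 * sees the vCPU as already being out of SMM.
	 */
	if ((ctxt->ops->get_hflags(ctxt) & X86EMUL_SMM_INSIDE_NMI_MASK) == 0)
		ctxt->ops->set_nmi_mask(ctxt, false);

	ctxt->ops->set_hflags(ctxt, ctxt->ops->get_hflags(ctxt) &
			      ~(X86EMUL_SMM_INSIDE_NMI_MASK | X86EMUL_SMM_MASK));

	if (emulator_has_longmode(ctxt))
		ret = rsm_load_state_64(ctxt, buf);
	else
		ret = rsm_load_state_32(ctxt, buf);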

RFC incoming...

> > +	}
> > +}
> > +
> >  static int vmx_post_smi_save_state(struct kvm_vcpu *vcpu, char *smstate)
> >  {
> >  	struct vcpu_vmx *vmx = to_vmx(vcpu);
> > @@ -7390,6 +7400,16 @@ static int vmx_pre_rsm_load_state(struct kvm_vcpu *vcpu, u64 smbase)
> >  	return 0;
> >  }
> >  
> > +static void vmx_post_rsm_load_state(struct kvm_vcpu *vcpu)
> > +{
> > +	struct vcpu_vmx *vmx = to_vmx(vcpu);
> > +
> > +	if (vmx->nested.smm.cr4_vmxe) {
> > +		WARN_ON(vmx_set_cr4(vcpu, kvm_read_cr4(vcpu) | X86_CR4_VMXE));
> > +		vmx->nested.smm.cr4_vmxe = false;
> 
> If we manage to get past the previous problem, this will likely fail:
> 
> post_rsm_load_state() is called with HF_SMM_MASK still set.
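
Right, and that's the other reason for the reordering: with the SMM flags
cleared before the load (per the sketch above), post_rsm_load_state() would
run with HF_SMM_MASK already gone, so setting CR4.VMXE on the way out
shouldn't be rejected for still being in SMM.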


