Re: [PATCH v4 4/6] KVM: nVMX: treat CR4.VMXE as reserved in SMM

On 10/10/2017 14:17, Ladi Prosek wrote:
> Intel SDM 34.14.3 Protection of CR4.VMXE in SMM:
> 
> "Under the default treatment, CR4.VMXE is treated as a reserved bit while a
> logical processor is in SMM. Any attempt by software running in SMM to set
> this bit causes a general-protection exception."
> 
> em_rsm() may set CR4.VMXE as part of loading the saved SMM state, so the
> SMM flag is temporarily cleared in the affected function.
> 
> Signed-off-by: Ladi Prosek <lprosek@xxxxxxxxxx>
> ---
>  arch/x86/kvm/emulate.c | 32 ++++++++++++++++++++++++--------
>  arch/x86/kvm/vmx.c     |  4 ++++
>  2 files changed, 28 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
> index 1e6a8a824b8b..2d0b9dcfb812 100644
> --- a/arch/x86/kvm/emulate.c
> +++ b/arch/x86/kvm/emulate.c
> @@ -2406,7 +2406,14 @@ static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
>  static int rsm_enter_protected_mode(struct x86_emulate_ctxt *ctxt,
>  				     u64 cr0, u64 cr4)
>  {
> -	int bad;
> +	int bad, ret = X86EMUL_CONTINUE;
> +
> +	/*
> +	 * Temporarily clear the SMM flag so we don't trip over the
> +	 * no-CR4.VMXE-in-SMM check.
> +	 */
> +	ctxt->ops->set_hflags(ctxt, ctxt->ops->get_hflags(ctxt) &
> +		~X86EMUL_SMM_MASK);

Maybe don't set VMXE here, and set it in post_leave_smm instead?  I can
just leave this patch out and let you redo it as a follow-up.
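Something along these lines, perhaps (untested sketch; it assumes the
saved SMRAM cr4 value is stashed somewhere the post-RSM path can still
see it, which this patch does not do today, and the PCIDE re-set would
need the same VMXE masking):

	/* rsm_enter_protected_mode(): never pass VMXE to set_cr()
	 * while the vCPU is still flagged as being in SMM, so the
	 * SMM flag does not have to be toggled at all here */
	bad = ctxt->ops->set_cr(ctxt, 4,
				cr4 & ~(X86_CR4_PCIDE | X86_CR4_VMXE));

	/* ... and in em_rsm(), once X86EMUL_SMM_MASK has been
	 * cleared from hflags, restore VMXE */
	if ((cr4 & X86_CR4_VMXE) &&
	    ctxt->ops->set_cr(ctxt, 4, cr4))
		return X86EMUL_UNHANDLEABLE;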

Paolo

>  	/*
>  	 * First enable PAE, long mode needs it before CR0.PG = 1 is set.
> @@ -2414,20 +2421,29 @@ static int rsm_enter_protected_mode(struct x86_emulate_ctxt *ctxt,
>  	 * if EFER.LMA=0, so set it separately.
>  	 */
>  	bad = ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PCIDE);
> -	if (bad)
> -		return X86EMUL_UNHANDLEABLE;
> +	if (bad) {
> +		ret = X86EMUL_UNHANDLEABLE;
> +		goto out;
> +	}
>  
>  	bad = ctxt->ops->set_cr(ctxt, 0, cr0);
> -	if (bad)
> -		return X86EMUL_UNHANDLEABLE;
> +	if (bad) {
> +		ret = X86EMUL_UNHANDLEABLE;
> +		goto out;
> +	}
>  
>  	if (cr4 & X86_CR4_PCIDE) {
>  		bad = ctxt->ops->set_cr(ctxt, 4, cr4);
> -		if (bad)
> -			return X86EMUL_UNHANDLEABLE;
> +		if (bad) {
> +			ret = X86EMUL_UNHANDLEABLE;
> +			goto out;
> +		}
>  	}
> +out:
> +	ctxt->ops->set_hflags(ctxt, ctxt->ops->get_hflags(ctxt) |
> +		X86EMUL_SMM_MASK);
>  
> -	return X86EMUL_CONTINUE;
> +	return ret;
>  }
>  
>  static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt, u64 smbase)
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index ace5ca6bc41a..f255038d5a91 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -4364,6 +4364,10 @@ static int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
>  		 */
>  		if (!nested_vmx_allowed(vcpu))
>  			return 1;
> +
> +		/* cr4.VMXE is a reserved bit in SMM */
> +		if (is_smm(vcpu))
> +			return 1;
>  	}
>  
>  	if (to_vmx(vcpu)->nested.vmxon && !nested_cr4_valid(vcpu, cr4))
> 



