Re: [PATCH] KVM: nVMX: Don't leak L1 MMIO regions to L2

As usual, nothing to say about the behavior, just about the code...

On 04/10/19 19:52, Jim Mattson wrote:
> + * Returns:
> + *   0 - success, i.e. proceed with actual VMEnter
> + *  -EFAULT - consistency check VMExit
> + *  -EINVAL - consistency check VMFail
> + *  -ENOTSUPP - kvm internal error
>   */

... the error codes do not mean much here.  Can you define an enum instead?

(I also thought about passing back the exit reason, where bit 31 could
be used to distinguish a VMX instruction failure from an entry-failure
VMexit.  That sounds cleaner if you just look at the prototype, but it
becomes messy fairly quickly because you have to pass back the exit
qualification too.  The from_vmentry argument could become u32
*p_exit_qual and be NULL if not called from VMentry, but it doesn't seem
worthwhile at all).
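
For concreteness, the enum could look something like the sketch below.
The names are invented here for illustration and are not taken from the
patch; the point is just that callers dispatch on symbolic values
instead of comparing against -EINVAL/-EFAULT/-ENOTSUPP:

```c
/*
 * Hypothetical sketch of the suggested return type for
 * nested_vmx_enter_non_root_mode(); names are made up, not final.
 */
enum nvmx_vmentry_status {
	NVMX_VMENTRY_SUCCESS,		/* proceed with actual VMEnter */
	NVMX_VMENTRY_VMFAIL,		/* consistency check VMFail */
	NVMX_VMENTRY_VMEXIT,		/* consistency check VMExit */
	NVMX_VMENTRY_KVM_INTERNAL_ERROR,
};

/*
 * A caller such as nested_vmx_run() could then switch on the symbolic
 * names, which makes the control flow self-documenting.
 */
const char *vmentry_status_name(enum nvmx_vmentry_status s)
{
	switch (s) {
	case NVMX_VMENTRY_SUCCESS:
		return "success";
	case NVMX_VMENTRY_VMFAIL:
		return "vmfail";
	case NVMX_VMENTRY_VMEXIT:
		return "vmexit";
	case NVMX_VMENTRY_KVM_INTERNAL_ERROR:
		return "kvm internal error";
	}
	return "unknown";
}
```

That way the compiler can also warn about a non-exhaustive switch,
which the errno-based scheme cannot.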

Thanks,

Paolo

>  int nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu, bool from_vmentry)
>  {
> @@ -3045,6 +3044,7 @@ int nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu, bool from_vmentry)
>  	bool evaluate_pending_interrupts;
>  	u32 exit_reason = EXIT_REASON_INVALID_STATE;
>  	u32 exit_qual;
> +	int r;
>  
>  	evaluate_pending_interrupts = exec_controls_get(vmx) &
>  		(CPU_BASED_VIRTUAL_INTR_PENDING | CPU_BASED_VIRTUAL_NMI_PENDING);
> @@ -3081,11 +3081,13 @@ int nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu, bool from_vmentry)
>  	prepare_vmcs02_early(vmx, vmcs12);
>  
>  	if (from_vmentry) {
> -		nested_get_vmcs12_pages(vcpu);
> +		r = nested_get_vmcs12_pages(vcpu);
> +		if (unlikely(r))
> +			return r;
>  
>  		if (nested_vmx_check_vmentry_hw(vcpu)) {
>  			vmx_switch_vmcs(vcpu, &vmx->vmcs01);
> -			return -1;
> +			return -EINVAL;
>  		}
>  
>  		if (nested_vmx_check_guest_state(vcpu, vmcs12, &exit_qual))
> @@ -3165,14 +3167,14 @@ int nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu, bool from_vmentry)
>  	vmx_switch_vmcs(vcpu, &vmx->vmcs01);
>  
>  	if (!from_vmentry)
> -		return 1;
> +		return -EFAULT;
>  
>  	load_vmcs12_host_state(vcpu, vmcs12);
>  	vmcs12->vm_exit_reason = exit_reason | VMX_EXIT_REASONS_FAILED_VMENTRY;
>  	vmcs12->exit_qualification = exit_qual;
>  	if (enable_shadow_vmcs || vmx->nested.hv_evmcs)
>  		vmx->nested.need_vmcs12_to_shadow_sync = true;
> -	return 1;
> +	return -EFAULT;
>  }
>  
>  /*
> @@ -3246,11 +3248,13 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
>  	vmx->nested.nested_run_pending = 1;
>  	ret = nested_vmx_enter_non_root_mode(vcpu, true);
>  	vmx->nested.nested_run_pending = !ret;
> -	if (ret > 0)
> -		return 1;
> -	else if (ret)
> +	if (ret == -EINVAL)
>  		return nested_vmx_failValid(vcpu,
>  			VMXERR_ENTRY_INVALID_CONTROL_FIELD);
> +	else if (ret == -ENOTSUPP)
> +		return 0;
> +	else if (ret)
> +		return 1;
>  
>  	/* Hide L1D cache contents from the nested guest.  */
>  	vmx->vcpu.arch.l1tf_flush_l1d = true;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index e6b5cfe3c345..e8b04560f064 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -7931,8 +7931,13 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>  	bool req_immediate_exit = false;
>  
>  	if (kvm_request_pending(vcpu)) {
> -		if (kvm_check_request(KVM_REQ_GET_VMCS12_PAGES, vcpu))
> -			kvm_x86_ops->get_vmcs12_pages(vcpu);
> +		if (kvm_check_request(KVM_REQ_GET_VMCS12_PAGES, vcpu)) {
> +			r = kvm_x86_ops->get_vmcs12_pages(vcpu);
> +			if (unlikely(r)) {
> +				r = 0;
> +				goto out;
> +			}
> +		}
>  		if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu))
>  			kvm_mmu_unload(vcpu);
>  		if (kvm_check_request(KVM_REQ_MIGRATE_TIMER, vcpu))




