Re: [PATCH v2] KVM: nVMX: Fix loss of pending IRQ/NMI before entering L2

2018-08-30 12:57+0300, Liran Alon:
> Consider the case where L1 had an IRQ/NMI event pending until it
> executed VMLAUNCH/VMRESUME, which wasn't delivered because it was
> disallowed (e.g. interrupts disabled). When L1 executes
> VMLAUNCH/VMRESUME, L0 needs to evaluate whether this pending event
> should cause an exit from L2 to L1 or be delivered directly to L2
> (e.g. in case L1 doesn't intercept EXTERNAL_INTERRUPT).
> 
> Usually this would be handled by L0 requesting an IRQ/NMI window
> by setting the VMCS accordingly. However, this setting was done on
> VMCS01 and now VMCS02 is active instead. Thus, when L1 executes
> VMLAUNCH/VMRESUME, we force L0 to perform pending event evaluation
> by requesting a KVM_REQ_EVENT.
> 
> Note that the above scenario exists when L1 KVM is about to enter
> L2 but requests an "immediate-exit", as in this case L1 will
> disable interrupts and then send a self-IPI before entering L2.
> 
> Co-authored-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
> Signed-off-by: Liran Alon <liran.alon@xxxxxxxxxx>
> ---
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> @@ -12574,6 +12577,25 @@ static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu, u32 *exit_qual)
>  		kvm_make_request(KVM_REQ_GET_VMCS12_PAGES, vcpu);
>  	}
>  
> +	/*
> +	 * If L1 had a pending IRQ/NMI event until it executed
> +	 * VMLAUNCH/VMRESUME which wasn't delivered because it was
> +	 * disallowed (e.g. interrupts disabled), L0 needs to
> +	 * evaluate whether this pending event should cause an exit
> +	 * from L2 to L1 or be delivered directly to L2 (e.g. in
> +	 * case L1 doesn't intercept EXTERNAL_INTERRUPT).
> +	 *
> +	 * Usually this would be handled by L0 requesting an
> +	 * IRQ/NMI window by setting the VMCS accordingly. However,
> +	 * this setting was done on VMCS01 and now VMCS02 is active
> +	 * instead. Thus, we force L0 to perform pending event
> +	 * evaluation by requesting a KVM_REQ_EVENT.
> +	 */
> +	if (vmcs01_cpu_exec_ctrl &
> +		(CPU_BASED_VIRTUAL_INTR_PENDING | CPU_BASED_VIRTUAL_NMI_PENDING)) {

Looks good, pending nested interrupts will be handled on the actual VM
entry, so we can ignore them here.

> +		kvm_make_request(KVM_REQ_EVENT, vcpu);
> +	}
> +
>  	/*
>  	 * Note no nested_vmx_succeed or nested_vmx_fail here. At this point
>  	 * we are no longer running L1, and VMLAUNCH/VMRESUME has not yet
> @@ -12702,7 +12724,8 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
>  	 * by event injection, halt vcpu.
>  	 */
>  	if ((vmcs12->guest_activity_state == GUEST_ACTIVITY_HLT) &&
> -	    !(vmcs12->vm_entry_intr_info_field & INTR_INFO_VALID_MASK)) {
> +	    !(vmcs12->vm_entry_intr_info_field & INTR_INFO_VALID_MASK) &&
> +	    !kvm_test_request(KVM_REQ_EVENT, vcpu)) {

What is the purpose of this check?  I think the event is recognized
when checking for runnability and will resume the VCPU,

thanks.

>  		vmx->nested.nested_run_pending = 0;
>  		return kvm_vcpu_halt(vcpu);
>  	}
> -- 
> 2.16.1
> 
