Re: [PATCH v2] KVM: nVMX: Fix loss of pending IRQ/NMI before entering L2

On Sept. 20, 2018, Paolo Bonzini wrote:

> On 11/09/2018 18:50, Liran Alon wrote:
> 
> >
> >
> >> On 11 Sep 2018, at 16:16, Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
> >>
> >> On 30/08/2018 11:57, Liran Alon wrote:
> >>> Consider the case where L1 has a pending IRQ/NMI event up until it
> >>> executes VMLAUNCH/VMRESUME, which wasn't delivered because it was
> >>> disallowed (e.g. interrupts disabled). When L1 executes
> >>> VMLAUNCH/VMRESUME, L0 needs to evaluate whether this pending event
> >>> should cause an exit from L2 to L1 or be delivered directly to L2
> >>> (e.g. in case L1 doesn't intercept EXTERNAL_INTERRUPT).
> >>>
> >>> Usually this would be handled by L0 requesting an IRQ/NMI window
> >>> by setting the VMCS accordingly. However, that setting was done on
> >>> VMCS01, and VMCS02 is now active instead. Thus, when L1 executes
> >>> VMLAUNCH/VMRESUME, we force L0 to re-evaluate pending events by
> >>> requesting a KVM_REQ_EVENT.
> >>>
> >>> Note that the above scenario exists when L1 KVM is about to enter
> >>> L2 but requests an "immediate-exit": in that case, L1 disables
> >>> interrupts and then sends a self-IPI before entering L2.
> >>>
> >>> Co-authored-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
> >>> Signed-off-by: Liran Alon <liran.alon@xxxxxxxxxx>
> >>> ---
> >>> arch/x86/kvm/vmx.c | 25 ++++++++++++++++++++++++-
> >>> 1 file changed, 24 insertions(+), 1 deletion(-)
> >>
> >> Any chance you can write a testcase for selftests/kvm?  The framework
> >> should be more or less stable by now.
> >>
> >> Paolo
> >
> > Actually, I have already written one and submitted it a week ago to the mailing list.
> > You can find the relevant unit-tests here:
> > https://patchwork.kernel.org/project/kvm/list/?series=14789
> 
> 
> The test doesn't pass with the new patch.
> 
> Also, the patch that was applied lacks the final hunk:
> 
> @@ -12702,7 +12724,8 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
>  	 * by event injection, halt vcpu.
>  	 */
>  	if ((vmcs12->guest_activity_state == GUEST_ACTIVITY_HLT) &&
> -	    !(vmcs12->vm_entry_intr_info_field & INTR_INFO_VALID_MASK)) {
> +	    !(vmcs12->vm_entry_intr_info_field & INTR_INFO_VALID_MASK) &&
> +	    !kvm_test_request(KVM_REQ_EVENT, vcpu)) {
>  		vmx->nested.nested_run_pending = 0;
>  		return kvm_vcpu_halt(vcpu);
>  	}
> 
> and the test doesn't pass even if I add this hunk.
> (The v1 patch works, but it is a much bigger hammer.)  Does anybody
> have time to take a look?

I tried to reproduce the failure: I ran both linked tests on Linux master
(a83f87c1d2a93), on Ubuntu 18.04 with QEMU 2.11.1, but they pass.
The tests I ran:
- vmx_pending_event_test
- vmx_pending_event_hlt_test

Can you post your failure log?
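For anyone following along, the condition changed by the final hunk can be
modeled as a small standalone sketch. The struct and names below are
simplified stand-ins for the real KVM code (vmcs12, kvm_test_request), not
the actual kernel API; the point is only the three-way check: halt the vCPU
on nested VM-entry only if L2 is in HLT activity state, no event is being
injected by the VM-entry, and no KVM_REQ_EVENT is pending.

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the real VMX/KVM definitions. */
#define GUEST_ACTIVITY_ACTIVE 0u
#define GUEST_ACTIVITY_HLT    1u
#define INTR_INFO_VALID_MASK  (1u << 31)

/* Model of the two vmcs12 fields the hunk inspects. */
struct vmcs12_model {
    unsigned int guest_activity_state;
    unsigned int vm_entry_intr_info_field;
};

/*
 * Model of the patched condition in nested_vmx_run(): halting on behalf
 * of L2 is only safe if nothing could wake it immediately -- i.e. L2 is
 * in HLT, VM-entry injects no event, and (the fix) no KVM_REQ_EVENT is
 * pending from L1's undelivered IRQ/NMI.
 */
static bool should_halt_on_vmentry(const struct vmcs12_model *vmcs12,
                                   bool kvm_req_event_pending)
{
    return vmcs12->guest_activity_state == GUEST_ACTIVITY_HLT &&
           !(vmcs12->vm_entry_intr_info_field & INTR_INFO_VALID_MASK) &&
           !kvm_req_event_pending;
}
```

With this model, a pending KVM_REQ_EVENT (or a valid event in the VM-entry
interruption-information field) prevents the halt, which is the behavior
the final hunk adds.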

Nikita


