RE: [kvm-unit-tests PATCHv2] unittests.cfg: Increase timeout for apic test

On Tue, Oct 20, 2020 at 10:48:24AM +0200, Paolo Bonzini wrote:
> On 19/10/20 18:52, Nadav Amit wrote:
> > IIRC, this test failed on VMware, and according to our previous discussions,
> > does not follow the SDM as NMIs might be collapsed [1].
> >
> > [1] https://marc.info/?l=kvm&m=145876994031502&w=2
>
> So should KVM be changed to always collapse NMIs, like this?

No, Nadav's failure is not on bare metal.  The test passes on bare metal.

Quoting myself [*]:

  Architecturally I don't think there are any guarantees regarding
  simultaneous NMIs, but practically speaking the probability of NMIs
  being collapsed (on hardware) when NMIs aren't blocked is nil.  So while
  it may be architecturally legal for a VMM to drop an NMI in this case,
  it's reasonable for software to expect two NMIs to be received.


[*] https://lkml.kernel.org/r/A7453828-BD8E-43F8-B140-6D660535B7F2@xxxxxxxxx

> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 105261402921..4032336ecba3 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -668,7 +668,7 @@ EXPORT_SYMBOL_GPL(kvm_inject_emulated_page_fault);
>
>  void kvm_inject_nmi(struct kvm_vcpu *vcpu)
>  {
> -     atomic_inc(&vcpu->arch.nmi_queued);
> +     atomic_set(&vcpu->arch.nmi_queued, 1);
>       kvm_make_request(KVM_REQ_NMI, vcpu);
>  }
>  EXPORT_SYMBOL_GPL(kvm_inject_nmi);
> @@ -8304,18 +8304,7 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit
>
>  static void process_nmi(struct kvm_vcpu *vcpu)
>  {
> -     unsigned limit = 2;
> -
> -     /*
> -      * x86 is limited to one NMI running, and one NMI pending after it.
> -      * If an NMI is already in progress, limit further NMIs to just one.
> -      * Otherwise, allow two (and we'll inject the first one immediately).
> -      */
> -     if (kvm_x86_ops.get_nmi_mask(vcpu) || vcpu->arch.nmi_injected)
> -             limit = 1;
> -
> -     vcpu->arch.nmi_pending += atomic_xchg(&vcpu->arch.nmi_queued, 0);
> -     vcpu->arch.nmi_pending = min(vcpu->arch.nmi_pending, limit);
> +     vcpu->arch.nmi_pending |= atomic_xchg(&vcpu->arch.nmi_queued, 0);
>       kvm_make_request(KVM_REQ_EVENT, vcpu);
>  }



