On 19/10/20 18:52, Nadav Amit wrote:
> IIRC, this test failed on VMware, and according to our previous discussions,
> does not follow the SDM as NMIs might be collapsed [1].
>
> [1] https://marc.info/?l=kvm&m=145876994031502&w=2

So should KVM be changed to always collapse NMIs, like this?

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 105261402921..4032336ecba3 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -668,7 +668,7 @@ EXPORT_SYMBOL_GPL(kvm_inject_emulated_page_fault);
 
 void kvm_inject_nmi(struct kvm_vcpu *vcpu)
 {
-	atomic_inc(&vcpu->arch.nmi_queued);
+	atomic_set(&vcpu->arch.nmi_queued, 1);
 	kvm_make_request(KVM_REQ_NMI, vcpu);
 }
 EXPORT_SYMBOL_GPL(kvm_inject_nmi);
@@ -8304,18 +8304,7 @@ static void inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit
 
 static void process_nmi(struct kvm_vcpu *vcpu)
 {
-	unsigned limit = 2;
-
-	/*
-	 * x86 is limited to one NMI running, and one NMI pending after it.
-	 * If an NMI is already in progress, limit further NMIs to just one.
-	 * Otherwise, allow two (and we'll inject the first one immediately).
-	 */
-	if (kvm_x86_ops.get_nmi_mask(vcpu) || vcpu->arch.nmi_injected)
-		limit = 1;
-
-	vcpu->arch.nmi_pending += atomic_xchg(&vcpu->arch.nmi_queued, 0);
-	vcpu->arch.nmi_pending = min(vcpu->arch.nmi_pending, limit);
+	vcpu->arch.nmi_pending |= atomic_xchg(&vcpu->arch.nmi_queued, 0);
 	kvm_make_request(KVM_REQ_EVENT, vcpu);
 }
 

Paolo
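
For readers outside the thread, here is a minimal, self-contained userspace sketch of the two queuing policies being compared. The function names are hypothetical, and the C11 stdatomic calls stand in for the kernel's atomic_inc()/atomic_set()/atomic_xchg(); this is not KVM code, just an illustration of the semantic difference.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for vcpu->arch.nmi_queued and nmi_pending. */
static atomic_uint nmi_queued;
static unsigned int nmi_pending;

/* Current policy: count queued injections, then cap at two pending
 * NMIs (one if an NMI is already running or injected). */
static void process_nmi_counted(bool nmi_in_flight)
{
	unsigned int limit = nmi_in_flight ? 1 : 2;

	nmi_pending += atomic_exchange(&nmi_queued, 0);
	if (nmi_pending > limit)
		nmi_pending = limit;
}

/* Proposed policy: any number of injections collapses into a single
 * pending NMI, since the producer stores 1 and the consumer ORs the
 * value in instead of adding it. */
static void process_nmi_collapsed(void)
{
	nmi_pending |= atomic_exchange(&nmi_queued, 0);
}

int main(void)
{
	/* Three back-to-back injections, no NMI in flight. */
	atomic_store(&nmi_queued, 3);	/* three atomic_inc() calls */
	process_nmi_counted(false);
	printf("counted:   %u pending\n", nmi_pending);	/* prints 2 */

	nmi_pending = 0;
	atomic_store(&nmi_queued, 1);	/* atomic_set(..., 1) */
	process_nmi_collapsed();
	printf("collapsed: %u pending\n", nmi_pending);	/* prints 1 */
	return 0;
}

Under the counted policy, back-to-back injections can leave two NMIs pending; under the collapsed policy they leave exactly one, which is the collapsing behavior the quoted message describes.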