On Tue, Apr 14, 2020 at 04:11:07PM -0400, Cathy Avery wrote:
> With NMI intercept moved to check_nested_events there is a race
> condition where vcpu->arch.nmi_pending is set late causing

How is nmi_pending set late?  The KVM_{G,S}ET_VCPU_EVENTS paths can't set it
because the current KVM_RUN thread holds the mutex, and the only other call
to process_nmi() is in the request path of vcpu_enter_guest, which has
already executed.

> the execution of check_nested_events to not setup correctly
> for nested.exit_required.  A second call to check_nested_events
> allows the injectable nmi to be detected in time in order to
> require immediate exit from L2 to L1.
>
> Signed-off-by: Cathy Avery <cavery@xxxxxxxxxx>
> ---
>  arch/x86/kvm/x86.c | 15 +++++++++++----
>  1 file changed, 11 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 027dfd278a97..ecfafcd93536 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -7734,10 +7734,17 @@ static int inject_pending_event(struct kvm_vcpu *vcpu)
>  		vcpu->arch.smi_pending = false;
>  		++vcpu->arch.smi_count;
>  		enter_smm(vcpu);
> -	} else if (vcpu->arch.nmi_pending && kvm_x86_ops.nmi_allowed(vcpu)) {
> -		--vcpu->arch.nmi_pending;
> -		vcpu->arch.nmi_injected = true;
> -		kvm_x86_ops.set_nmi(vcpu);
> +	} else if (vcpu->arch.nmi_pending) {
> +		if (is_guest_mode(vcpu) && kvm_x86_ops.check_nested_events) {
> +			r = kvm_x86_ops.check_nested_events(vcpu);
> +			if (r != 0)
> +				return r;
> +		}
> +		if (kvm_x86_ops.nmi_allowed(vcpu)) {
> +			--vcpu->arch.nmi_pending;
> +			vcpu->arch.nmi_injected = true;
> +			kvm_x86_ops.set_nmi(vcpu);
> +		}
>  	} else if (kvm_cpu_has_injectable_intr(vcpu)) {
>  		/*
>  		 * Because interrupts can be injected asynchronously, we are
> --
> 2.20.1
>
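
To make my question above concrete, the two places I'm thinking of that
touch nmi_pending look roughly like this (abridged and paraphrased from
arch/x86/kvm/x86.c around the base of this series, not a verbatim copy, so
double-check against the tree):

	/*
	 * KVM_SET_VCPU_EVENTS: can write nmi_pending, but the ioctl runs with
	 * vcpu->mutex held, so it cannot race with the KVM_RUN thread that is
	 * currently inside vcpu_enter_guest()/inject_pending_event().
	 */
	static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
						      struct kvm_vcpu_events *events)
	{
		...
		vcpu->arch.nmi_injected = events->nmi.injected;
		if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING)
			vcpu->arch.nmi_pending = events->nmi.pending;
		...
	}

	/*
	 * vcpu_enter_guest(): the only other path that bumps nmi_pending is
	 * process_nmi(), which runs from the request handling near the top of
	 * the function, i.e. before inject_pending_event() is reached on the
	 * same pass.
	 */
	static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
	{
		...
		if (kvm_check_request(KVM_REQ_NMI, vcpu))
			process_nmi(vcpu);
		...
		/* later, once requests have been handled */
		inject_pending_event(vcpu);
		...
	}

So by the time inject_pending_event() runs, nmi_pending should already be
stable for this iteration, which is why I don't see where the "set late"
race comes from.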