On Wed, Apr 22, 2020 at 02:06:49PM -0700, Sean Christopherson wrote:
> On Mon, Apr 13, 2020 at 05:09:45PM -0700, Jim Mattson wrote:
> > Fixes: f4124500c2c13 ("KVM: nVMX: Fully emulate preemption timer")
> > Signed-off-by: Jim Mattson <jmattson@xxxxxxxxxx>
> > Reviewed-by: Oliver Upton <oupton@xxxxxxxxxx>
> > Reviewed-by: Peter Shier <pshier@xxxxxxxxxx>
>
> ...
>
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 83050977490c..aae01253bfba 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -4682,7 +4682,7 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
> >  		if (is_icebp(intr_info))
> >  			WARN_ON(!skip_emulated_instruction(vcpu));
> >
> > -		kvm_queue_exception(vcpu, DB_VECTOR);
> > +		kvm_requeue_exception(vcpu, DB_VECTOR);
>
> This isn't wrong per se, but it's effectively papering over an underlying
> bug, e.g. the same missed preemption timer bug can manifest if the timer
> expires while in KVM context (because the hr timer is left running) and KVM
> queues an exception for _any_ reason.

I just reread your changelog and realized this patch was intended to fix a
different symptom than what I observed, i.e. the above probably doesn't make
a whole lot of sense.

It just so happened that this change also resolved my "missing timer" bug,
because directly injecting the #DB would cause vmx_check_nested_events() to
return -EBUSY on the preemption timer.

That being said, I'm 99% certain that the behavior you observed is fixed by
correctly handling the priority of non-exiting events vs. exiting events,
i.e. slightly different justification, same net result.