On Wed, 3 Jan 2024 at 13:24, Prasad Pandit <ppandit@xxxxxxxxxx> wrote:
> The kvm_vcpu_ioctl_x86_set_vcpu_events() routine makes a 'KVM_REQ_NMI'
> request for a vcpu even when its 'events->nmi.pending' is zero.
> Ex:
>   qemu_thread_start
>    kvm_vcpu_thread_fn
>     qemu_wait_io_event
>      qemu_wait_io_event_common
>       process_queued_cpu_work
>        do_kvm_cpu_synchronize_post_init/_reset
>         kvm_arch_put_registers
>          kvm_put_vcpu_events (cpu, level=[2|3])
>
> This leads vCPU threads in QEMU to constantly acquire and release the
> global mutex lock, delaying guest boot due to lock contention.
> Add a check to make the KVM_REQ_NMI request only if the vcpu has an
> NMI pending.
>
> Fixes: bdedff263132 ("KVM: x86: Route pending NMIs from userspace through process_nmi()")
> Signed-off-by: Prasad Pandit <pjp@xxxxxxxxxxxxxxxxx>
> ---
>  arch/x86/kvm/x86.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 1a3aaa7dafae..468870450b8b 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5405,7 +5405,8 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
>  	if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING) {
>  		vcpu->arch.nmi_pending = 0;
>  		atomic_set(&vcpu->arch.nmi_queued, events->nmi.pending);
> -		kvm_make_request(KVM_REQ_NMI, vcpu);
> +		if (events->nmi.pending)
> +			kvm_make_request(KVM_REQ_NMI, vcpu);
>  	}
>
>  	static_call(kvm_x86_set_nmi_mask)(vcpu, events->nmi.masked);
> --
> 2.43.0

Ping...!
---
  - Prasad
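For readers following along outside the kernel tree, the pattern the patch applies can be sketched in plain userspace C: raise a request bit only when there is actual pending work, so a zero-valued update does not trigger a spurious vcpu kick. The struct and function names below are illustrative only, not the kernel's; this is a minimal model of the idea, not kernel code.

```c
#include <assert.h>

/* Hypothetical stand-in for the vcpu request bitmask. */
#define REQ_NMI (1u << 0)

struct vcpu_state {
	unsigned int requests;	/* pending request bits, like vcpu->requests */
	int nmi_queued;		/* like vcpu->arch.nmi_queued */
};

/* Pre-fix behavior: the request is raised unconditionally,
 * so even pending == 0 forces an extra wakeup of the vcpu. */
void set_nmi_events_old(struct vcpu_state *v, int pending)
{
	v->nmi_queued = pending;
	v->requests |= REQ_NMI;
}

/* Post-fix behavior: raise the request only when there is
 * really an NMI to deliver. */
void set_nmi_events_new(struct vcpu_state *v, int pending)
{
	v->nmi_queued = pending;
	if (pending)
		v->requests |= REQ_NMI;
}
```

With the guard in place, the zero-pending case leaves the request bit clear, which in the real code means no needless trip through the request-processing path (and, per the report above, no extra contention on QEMU's global mutex during boot).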