On Tue, Feb 06, 2024, Dongli Zhang wrote:
> Hi Prasad,
>
> On 1/2/24 23:53, Prasad Pandit wrote:
> > From: Prasad Pandit <pjp@xxxxxxxxxxxxxxxxx>
> >
> > kvm_vcpu_ioctl_x86_set_vcpu_events() routine makes 'KVM_REQ_NMI'
> > request for a vcpu even when its 'events->nmi.pending' is zero.
> > Ex:
> >     qemu_thread_start
> >      kvm_vcpu_thread_fn
> >       qemu_wait_io_event
> >        qemu_wait_io_event_common
> >         process_queued_cpu_work
> >          do_kvm_cpu_synchronize_post_init/_reset
> >           kvm_arch_put_registers
> >            kvm_put_vcpu_events (cpu, level=[2|3])
> >
> > This leads vCPU threads in QEMU to constantly acquire & release the
> > global mutex lock, delaying the guest boot due to lock contention.
>
> Would you mind sharing where and how the lock contention is at QEMU space? That
> is, how the QEMU mutex lock is impacted by KVM KVM_REQ_NMI?
>
> Or you meant line 3031 at QEMU side?

Yeah, something like that.  Details in this thread.

https://lore.kernel.org/all/CAE8KmOyffXD4k69vRAFwesaqrBGzFY3i+kefbkHcQf4=jNYzOA@xxxxxxxxxxxxxx

> 2858 int kvm_cpu_exec(CPUState *cpu)
> 2859 {
> 2860     struct kvm_run *run = cpu->kvm_run;
> 2861     int ret, run_ret;
> ... ...
> 3023         default:
> 3024             DPRINTF("kvm_arch_handle_exit\n");
> 3025             ret = kvm_arch_handle_exit(cpu, run);
> 3026             break;
> 3027         }
> 3028     } while (ret == 0);
> 3029
> 3030     cpu_exec_end(cpu);
> 3031     qemu_mutex_lock_iothread();
> 3032
> 3033     if (ret < 0) {
> 3034         cpu_dump_state(cpu, stderr, CPU_DUMP_CODE);
> 3035         vm_stop(RUN_STATE_INTERNAL_ERROR);
> 3036     }
> 3037
> 3038     qatomic_set(&cpu->exit_request, 0);
> 3039     return ret;
> 3040 }