On Wed, Aug 14, 2024, Paolo Bonzini wrote:
> On 6/8/24 02:06, Sean Christopherson wrote:
> > Use a dedicated mutex to guard kvm_usage_count to fix a potential deadlock
> > on x86 due to a chain of locks and SRCU synchronizations.  Translating the
> > below lockdep splat, CPU1 #6 will wait on CPU0 #1, CPU0 #8 will wait on
> > CPU2 #3, and CPU2 #7 will wait on CPU1 #4 (if there's a writer, due to the
> > fairness of r/w semaphores).
> >
> >      CPU0                     CPU1                     CPU2
> > 1    lock(&kvm->slots_lock);
> > 2                                                      lock(&vcpu->mutex);
> > 3                                                      lock(&kvm->srcu);
> > 4                             lock(cpu_hotplug_lock);
> > 5                             lock(kvm_lock);
> > 6                             lock(&kvm->slots_lock);
> > 7                                                      lock(cpu_hotplug_lock);
> > 8    sync(&kvm->srcu);
> >
> > Note, there are likely more potential deadlocks in KVM x86, e.g. the same
> > pattern of taking cpu_hotplug_lock outside of kvm_lock likely exists with
> > __kvmclock_cpufreq_notifier()
>
> Offhand I couldn't see any places where {,__}cpufreq_driver_target() is
> called within cpus_read_lock().  I didn't look too closely though.

Aha!  I think I finally found it, and it's rather obvious now that I've
found it.

I looked quite deeply on multiple occasions in the past and never found
such a case, but I could've sworn someone (Kai?) reported a lockdep splat
related to the cpufreq stuff when I did the big generic hardware enabling
a while back.  Of course, I couldn't find that either :-)

Anyways... cpuhp_cpufreq_online() is a CPU hotplug callback, i.e. it runs
with cpu_hotplug_lock held, and it calls down into
__cpufreq_driver_target():

  cpuhp_cpufreq_online()
  |
  -> cpufreq_online()
     |
     -> cpufreq_gov_performance_limits()
        |
        -> __cpufreq_driver_target()
           |
           -> __target_index()

> > +``kvm_usage_count``
> > +^^^^^^^^^^^^^^^^^^^

That heading should be ``kvm_usage_lock``.

Good job me.
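
To make the pattern concrete, here is a minimal sketch (illustrative only,
NOT the real __kvmclock_cpufreq_notifier() code) of what that call chain
implies: a cpufreq transition notifier that takes kvm_lock can be invoked
with cpu_hotplug_lock already held, which establishes the
cpu_hotplug_lock -> kvm_lock ordering discussed above.

  /*
   * Illustrative sketch, not the actual kvmclock notifier.  The cpufreq
   * core invokes transition notifiers from __target_index(), which
   * cpuhp_cpufreq_online() reaches with cpu_hotplug_lock held, so taking
   * kvm_lock here nests kvm_lock inside cpu_hotplug_lock.
   */
  #include <linux/cpufreq.h>
  #include <linux/mutex.h>
  #include <linux/notifier.h>

  static DEFINE_MUTEX(kvm_lock);               /* local stand-in for KVM's global */
  static unsigned long clock_update_requests;  /* stand-in for real work */

  static int example_cpufreq_notifier(struct notifier_block *nb,
                                      unsigned long val, void *data)
  {
          if (val != CPUFREQ_POSTCHANGE)
                  return 0;

          /* cpu_hotplug_lock may already be held by the caller. */
          mutex_lock(&kvm_lock);
          clock_update_requests++;  /* e.g. kick vCPUs to refresh kvmclock */
          mutex_unlock(&kvm_lock);

          return 0;
  }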
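
And for completeness, a rough sketch of the shape of the fix under review,
with the same caveats: kvm_usage_count gets its own mutex, so the
usage-count paths stop taking kvm_lock entirely, removing kvm_lock (and
everything that nests inside it) from the hotplug-time chain.
hardware_enable_nolock() below is a stand-in for the real per-CPU enable
hook, not the actual patch.

  /*
   * Rough sketch of the fix's shape: guard kvm_usage_count with a
   * dedicated kvm_usage_lock instead of kvm_lock.  cpu_hotplug_lock is
   * still taken first, matching the ordering used by cpuhp callbacks,
   * but kvm_lock is no longer involved in this path.
   */
  #include <linux/cpu.h>
  #include <linux/mutex.h>
  #include <linux/smp.h>

  static DEFINE_MUTEX(kvm_usage_lock);  /* dedicated lock for the usage count */
  static int kvm_usage_count;

  static void hardware_enable_nolock(void *junk)
  {
          /* stand-in for the per-CPU virtualization-enable hook */
  }

  static int hardware_enable_all(void)
  {
          cpus_read_lock();               /* cpu_hotplug_lock, read side  */
          mutex_lock(&kvm_usage_lock);    /* was: mutex_lock(&kvm_lock)   */
          if (!kvm_usage_count++)
                  on_each_cpu(hardware_enable_nolock, NULL, 1);
          mutex_unlock(&kvm_usage_lock);
          cpus_read_unlock();

          return 0;
  }

Keeping kvm_usage_lock a leaf-ish lock is the point: nothing that can end
up inside kvm_lock's chains (slots_lock, SRCU synchronization) is
reachable while it is held.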