Last June, Sean reported a possible deadlock scenario caused by cpu_hotplug_lock being taken, unknowingly, while other KVM locks were held. While he solved his immediate needs by introducing kvm_usage_lock, the same possibility could exist elsewhere. The simplest sequence that I could concoct requires four CPUs and no constant TSC, so this is really theoretical; but since the fix is easy and can be documented, let's do it.

At the time the suggested solution was to use RCU for vm_list, but that's not even necessary: it's enough to keep the critical sections small, avoiding _any_ mutex_lock while holding kvm_lock. This is not hard to do, because you can always drop kvm_lock in the middle of a vm_list walk if you first take a reference to the current struct kvm with kvm_get_kvm(); and anyway, most walks of vm_list are already relatively short and only take spinlocks. The only case in which concurrent readers would really be useful is accessing statistics *from debugfs*, but even there an rwsem would do.

RFC because it's compile-tested only.

Paolo

Paolo Bonzini (2):
  KVM: x86: fix usage of kvm_lock in set_nx_huge_pages()
  Documentation: explain issues with taking locks inside kvm_lock

 Documentation/virt/kvm/locking.rst | 27 ++++++++++++++++++++-------
 arch/x86/kvm/mmu/mmu.c             | 13 +++++++------
 2 files changed, 27 insertions(+), 13 deletions(-)

-- 
2.43.5