From: David Woodhouse <dwmw@xxxxxxxxxxxx>

Lockdep is learning to spot deadlocks with sleepable RCU vs. mutexes,
which can occur when one code path calls synchronize_srcu() with a
mutex held, while another code path attempts to take the same mutex
from within a read-side section.

Since lockdep isn't very good at reading the English prose in
Documentation/virt/kvm/locking.rst, give it a demonstration by calling
synchronize_srcu(&kvm->srcu) while holding kvm->lock in
kvm_create_vm(). The cases where this happens naturally are relatively
esoteric and may not occur otherwise.

Signed-off-by: David Woodhouse <dwmw@xxxxxxxxxxxx>
---
 virt/kvm/kvm_main.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 13e88297f999..285b3c5a6364 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1173,6 +1173,16 @@ static struct kvm *kvm_create_vm(unsigned long type, const char *fdname)
 	if (init_srcu_struct(&kvm->irq_srcu))
 		goto out_err_no_irq_srcu;
 
+#ifdef CONFIG_LOCKDEP
+	/*
+	 * Ensure lockdep knows that it's not permitted to lock kvm->lock
+	 * from a SRCU read section on kvm->srcu.
+	 */
+	mutex_lock(&kvm->lock);
+	synchronize_srcu(&kvm->srcu);
+	mutex_unlock(&kvm->lock);
+#endif
+
 	refcount_set(&kvm->users_count, 1);
 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
 		for (j = 0; j < 2; j++) {
-- 
2.35.3