On Tue, Jan 10, 2023, David Woodhouse wrote:
> On Tue, 2023-01-10 at 15:10 +0100, Paolo Bonzini wrote:
> > On 1/10/23 13:55, David Woodhouse wrote:
> > > > However, I
> > > > completely forgot the sev_lock_vcpus_for_migration case, which is the
> > > > exception that... well, disproves the rule.
> > >
> > > But because it's an exception and rarely happens in practice, lockdep
> > > didn't notice and keep me honest sooner? Can we take them in that order
> > > just for fun at startup, to make sure lockdep knows?
> >
> > Sure, why not.  Out of curiosity, is this kind of "priming" a thing
> > elsewhere in the kernel?
>
> I did this:
>
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -461,6 +461,11 @@ void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
>  static void kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
>  {
>  	mutex_init(&vcpu->mutex);
> +
> +	/* Ensure that lockdep knows vcpu->mutex is taken *inside* kvm->lock */
> +	mutex_lock(&vcpu->mutex);
> +	mutex_unlock(&vcpu->mutex);

No idea about the splat below, but kvm_vcpu_init() doesn't run under
kvm->lock, so I wouldn't expect this to do anything.