On Fri, 05 Nov 2021 20:12:12 +0000,
Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Fri, Nov 05, 2021, Marc Zyngier wrote:
> > All architectures have similar loops iterating over the vcpus,
> > freeing one vcpu at a time, and eventually wiping the reference
> > off the vcpus array. They are also inconsistently taking
> > the kvm->lock mutex when wiping the references from the array.
>
> ...
>
> > +void kvm_destroy_vcpus(struct kvm *kvm)
> > +{
> > +	unsigned int i;
> > +	struct kvm_vcpu *vcpu;
> > +
> > +	kvm_for_each_vcpu(i, vcpu, kvm)
> > +		kvm_vcpu_destroy(vcpu);
> > +
> > +	mutex_lock(&kvm->lock);
>
> But why is kvm->lock taken here?  Unless I'm overlooking an arch,
> everyone calls this from kvm_arch_destroy_vm(), in which case this
> is the only remaining reference to @kvm.  And if there's some magic
> path for which that's not true, I don't see how it can possibly be
> safe to call kvm_vcpu_destroy() without holding kvm->lock, or how
> this would guarantee that all vCPUs have actually been destroyed
> before nullifying the array.

I asked myself the same question two years ago, and couldn't really
understand the requirement. However, x86 does just that, so I
preserved the behaviour.

If you too believe that this is just wrong, I'm happy to drop the
locking altogether. If that breaks someone's flow, they'll shout
soon enough.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.