> > +static int svm_sev_lock_for_migration(struct kvm *kvm)
> > +{
> > +        struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> > +        int ret;
> > +
> > +        /*
> > +         * Bail if this VM is already involved in a migration to avoid deadlock
> > +         * between two VMs trying to migrate to/from each other.
> > +         */
> > +        spin_lock(&sev->migration_lock);
> > +        if (sev->migration_in_progress)
> > +                ret = -EBUSY;
> > +        else {
> > +                /*
> > +                 * Otherwise indicate VM is migrating and take the KVM lock.
> > +                 */
> > +                sev->migration_in_progress = true;
> > +                mutex_lock(&kvm->lock);
> > +                ret = 0;
> > +        }
> > +        spin_unlock(&sev->migration_lock);
> > +
> > +        return ret;
> > +}
> > +
> > +static void svm_unlock_after_migration(struct kvm *kvm)
> > +{
> > +        struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> > +
> > +        mutex_unlock(&kvm->lock);
> > +        WRITE_ONCE(sev->migration_in_progress, false);
> > +}
> > +
>
> This entire locking scheme seems over-complicated to me. Can we simply
> rely on `migration_lock` and get rid of `migration_in_progress`? I was
> chatting about these patches with Peter while he worked on this new
> version, but he mentioned that this locking scheme had been suggested
> by Sean in a previous review. Sean: what do you think? My rationale
> was that this is called via a VM-level ioctl, so serializing the
> entire code path on `migration_lock` seems fine. But maybe I'm missing
> something?
>
> Marc

I think that relying only on the spin lock could result in a deadlock.
Suppose userspace double-migrates two VMs, call them A and B: A could
grab VM_A.spin_lock and then VM_A.kvm_mutex. Meanwhile, B could grab
VM_B.spin_lock and VM_B.kvm_mutex. Then A attempts to grab
VM_B.spin_lock while B attempts to grab VM_A.spin_lock, and we have a
classic ABBA deadlock.

If the same interleaving happens with the proposed scheme, then when A
attempts to lock B, VM_B.spin_lock will be free, but the bool will mark
VM B as under migration, so A will unlock and bail with -EBUSY instead
of blocking.

Sean originally proposed a global spin lock, but I thought a lock per
kvm_sev_info struct would also be safe.
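
To make that concrete, here is a standalone userspace sketch of the
same pattern. This is toy illustration code, not the kernel patch:
pthreads stand in for the kernel locks, and the names (struct vm,
vm_lock_for_migration(), migrate(), ...) are made up for this example.
The single-threaded main() shows the bool turning a would-be deadlock
into -EBUSY; with only the locks and no flag, the nested attempt would
block forever on A's mutex.

#include <assert.h>
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>

struct vm {
        pthread_spinlock_t migration_lock;      /* sev->migration_lock */
        pthread_mutex_t lock;                   /* kvm->lock */
        bool migration_in_progress;
};

static void vm_init(struct vm *vm)
{
        pthread_spin_init(&vm->migration_lock, PTHREAD_PROCESS_PRIVATE);
        pthread_mutex_init(&vm->lock, NULL);
        vm->migration_in_progress = false;
}

/* Same shape as svm_sev_lock_for_migration() in the quoted patch. */
static int vm_lock_for_migration(struct vm *vm)
{
        int ret = 0;

        pthread_spin_lock(&vm->migration_lock);
        if (vm->migration_in_progress) {
                /* Another migration owns this VM: bail, don't block. */
                ret = -EBUSY;
        } else {
                /* Taking the mutex under the spin lock mirrors the patch. */
                vm->migration_in_progress = true;
                pthread_mutex_lock(&vm->lock);
        }
        pthread_spin_unlock(&vm->migration_lock);

        return ret;
}

static void vm_unlock_after_migration(struct vm *vm)
{
        pthread_mutex_unlock(&vm->lock);
        vm->migration_in_progress = false;
}

/* Lock dst then src; undo dst and give up if src is already migrating. */
static int migrate(struct vm *dst, struct vm *src)
{
        int ret = vm_lock_for_migration(dst);

        if (ret)
                return ret;
        ret = vm_lock_for_migration(src);
        if (ret) {
                vm_unlock_after_migration(dst);
                return ret;
        }
        /* ... move state from src to dst here ... */
        vm_unlock_after_migration(src);
        vm_unlock_after_migration(dst);
        return 0;
}

int main(void)
{
        struct vm a, b;
        int ret;

        vm_init(&a);
        vm_init(&b);

        /* Pretend VM A is mid-migration, as in the racing-ioctls case. */
        ret = vm_lock_for_migration(&a);
        assert(ret == 0);

        /*
         * Without the flag this would block forever on a.lock; with it,
         * migrate() sees a.migration_in_progress and bails cleanly.
         */
        ret = migrate(&b, &a);
        assert(ret == -EBUSY);

        vm_unlock_after_migration(&a);
        return 0;
}

The key property is that the spin lock is only held long enough to test
and set the flag, so neither VM ever sleeps while holding a lock the
other VM needs.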