On Fri, Apr 29, 2022 at 11:21 AM Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
>
> On Fri, Apr 29, 2022 at 7:12 PM Peter Gonda <pgonda@xxxxxxxxxx> wrote:
> > Sounds good. Instead of doing this prev_vcpu solution we could just
> > keep the 1st vcpu for source and target. I think this could work since
> > all the vcpu->mutex.dep_maps do not point to the same string.
> >
> > Lock:
> > bool acquired = false;
> > kvm_for_each_vcpu(...) {
> >         if (mutex_lock_killable_nested(&vcpu->mutex, role))
> >                 goto out_unlock;
> >         acquired = true;
> >         if (acquired)
> >                 mutex_release(&vcpu->mutex, role);
> > }
>
> Almost:
>
>     bool first = true;
>     kvm_for_each_vcpu(...) {
>         if (mutex_lock_killable_nested(&vcpu->mutex, role))
>             goto out_unlock;
>         if (first)
>             ++role, first = false;
>         else
>             mutex_release(&vcpu->mutex, role);
>     }
>
> and to unlock:
>
>     bool first = true;
>     kvm_for_each_vcpu(...) {
>         if (first)
>             first = false;
>         else
>             mutex_acquire(&vcpu->mutex, role);
>         mutex_unlock(&vcpu->mutex);
>     }
>
> because you cannot use the first vCPU's role again when locking.

Ah yes, I missed that. I would suggest `role = SEV_NR_MIGRATION_ROLES` or
something else instead of role++ to avoid leaking this implementation
detail outside of the function signature / enum.

>
> Paolo
>
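
To make that concrete, below is a rough, untested sketch of what the two
helpers could look like with the role switched to SEV_NR_MIGRATION_ROLES
after the first vCPU. The helper names (sev_lock_vcpus_for_migration,
sev_unlock_vcpus_for_migration), the enum layout, and the direct
mutex_release()/mutex_acquire() calls on vcpu->mutex.dep_map are assumptions
for illustration only, not the final patch:

	/* Assumed enum; only SEV_NR_MIGRATION_ROLES matters for the trick. */
	enum sev_migration_role {
		SEV_MIGRATION_SOURCE = 0,
		SEV_MIGRATION_TARGET,
		SEV_NR_MIGRATION_ROLES,
	};

	static int sev_lock_vcpus_for_migration(struct kvm *kvm,
						enum sev_migration_role role)
	{
		struct kvm_vcpu *vcpu;
		unsigned long i, j;
		bool first = true;

		kvm_for_each_vcpu(i, vcpu, kvm) {
			if (mutex_lock_killable_nested(&vcpu->mutex, role))
				goto out_unlock;

			if (first) {
				/*
				 * Switch to a role that cannot collide with
				 * the source/target role used for the first
				 * vCPU, so lockdep never sees the same
				 * subclass taken twice.
				 */
				role = SEV_NR_MIGRATION_ROLES;
				first = false;
			} else {
				/* Keep the lock, drop the lockdep annotation. */
				mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
			}
		}

		return 0;

	out_unlock:
		first = true;
		kvm_for_each_vcpu(j, vcpu, kvm) {
			if (j == i)
				break;

			if (first)
				first = false;
			else
				/* Restore the annotation before unlocking. */
				mutex_acquire(&vcpu->mutex.dep_map, role, 0,
					      _THIS_IP_);

			mutex_unlock(&vcpu->mutex);
		}
		return -EINTR;
	}

	static void sev_unlock_vcpus_for_migration(struct kvm *kvm)
	{
		struct kvm_vcpu *vcpu;
		unsigned long i;
		bool first = true;

		kvm_for_each_vcpu(i, vcpu, kvm) {
			if (first)
				first = false;
			else
				/* Re-add the annotation released when locking. */
				mutex_acquire(&vcpu->mutex.dep_map,
					      SEV_NR_MIGRATION_ROLES, 0,
					      _THIS_IP_);

			mutex_unlock(&vcpu->mutex);
		}
	}

The net effect is that lockdep only ever tracks one vcpu->mutex subclass per
role (the first source vCPU and the first target vCPU); the remaining mutexes
stay locked but unannotated, and the annotation is restored right before
mutex_unlock() so lockdep's held-lock bookkeeping stays balanced.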