On Wed, Apr 27, 2022 at 2:18 PM Peter Gonda <pgonda@xxxxxxxxxx> wrote:
>
> On Wed, Apr 27, 2022 at 10:04 AM Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
> >
> > On 4/26/22 21:06, Peter Gonda wrote:
> > > On Thu, Apr 21, 2022 at 9:56 AM Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
> > >>
> > >> On 4/20/22 22:14, Peter Gonda wrote:
> > >>>>>> svm_vm_migrate_from() uses sev_lock_vcpus_for_migration() to lock all
> > >>>>>> source and target vcpu->locks. Mark the nested subclasses to avoid false
> > >>>>>> positives from lockdep.
> > >>>> Nope. Good catch, I didn't realize there was a limit 8 subclasses:
> > >>> Does anyone have thoughts on how we can resolve this vCPU locking with
> > >>> the 8 subclass max?
> > >>
> > >> The documentation does not have anything. Maybe you can call
> > >> mutex_release manually (and mutex_acquire before unlocking).
> > >>
> > >> Paolo
> > >
> > > Hmm this seems to be working thanks Paolo. To lock I have been using:
> > >
> > > ...
> > >         if (mutex_lock_killable_nested(
> > >                     &vcpu->mutex, i * SEV_NR_MIGRATION_ROLES + role))
> > >                 goto out_unlock;
> > >         mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
> > > ...
> > >
> > > To unlock:
> > > ...
> > >         mutex_acquire(&vcpu->mutex.dep_map, 0, 0, _THIS_IP_);
> > >         mutex_unlock(&vcpu->mutex);
> > > ...
> > >
> > > If I understand correctly we are fully disabling lockdep by doing
> > > this. If this is the case should I just remove all the '_nested' usage
> > > so switch to mutex_lock_killable() and remove the per vCPU subclass?
> >
> > Yes, though you could also do:
> >
> >         bool acquired = false;
> >         kvm_for_each_vcpu(...) {
> >                 if (acquired)
> >                         mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
> >                 if (mutex_lock_killable_nested(&vcpu->mutex, role)
> >                         goto out_unlock;
> >                 acquired = true;
> >                 ...
> >
> > and to unlock:
> >
> >         bool acquired = true;
> >         kvm_for_each_vcpu(...) {
> >                 if (!acquired)
> >                         mutex_acquire(&vcpu->mutex.dep_map, 0, role, _THIS_IP_);
> >                 mutex_unlock(&vcpu->mutex);
> >                 acquired = false;
> >         }

So when actually trying this out I noticed that we are releasing the
mutex of the current vcpu in the iteration, but we haven't actually
taken that lock yet. So we'd need to maintain a prev_* pointer and
release that one instead. That seems a bit more complicated than just
doing this:

To lock:

        bool acquired = false;
        kvm_for_each_vcpu(...) {
                if (!acquired) {
                        if (mutex_lock_killable_nested(&vcpu->mutex, role))
                                goto out_unlock;
                        acquired = true;
                } else {
                        if (mutex_lock_killable(&vcpu->mutex))
                                goto out_unlock;
                }
        }

To unlock:

        kvm_for_each_vcpu(...) {
                mutex_unlock(&vcpu->mutex);
        }

This way, instead of manually releasing and re-acquiring the lockdep
map, we just lock the first vcpu with mutex_lock_killable_nested(). I
think this maintains the property you suggested of coalescing all the
mutexes for a VM in a single subclass. Thoughts?

> >
> > where role is either 0 or SINGLE_DEPTH_NESTING and is passed to
> > sev_{,un}lock_vcpus_for_migration.
> >
> > That coalesces all the mutexes for a vm in a single subclass, essentially.
>
> Ah thats a great idea to allow for lockdep to work still. I'll try
> that out, thanks again Paolo.
>
> >
> > Paolo
> >
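
For reference, spelled out as full helpers, the scheme proposed above
could look roughly like the sketch below. This is only an illustration
of the idea in this mail, not necessarily what was merged; the
enum sev_migration_role parameter, the -EINTR return value and the
partial-unwind loop on failure are assumptions carried over from the
patch under discussion.

/*
 * Sketch of the proposal above: only the first vcpu of each VM is taken
 * with a per-role lockdep subclass, the rest with a plain killable lock.
 * Assumes enum sev_migration_role { SEV_MIGRATION_SOURCE = 0,
 * SEV_MIGRATION_TARGET, SEV_NR_MIGRATION_ROLES } from the patch.
 */
static int sev_lock_vcpus_for_migration(struct kvm *kvm,
                                        enum sev_migration_role role)
{
        struct kvm_vcpu *vcpu;
        unsigned long i, j;
        bool acquired = false;

        kvm_for_each_vcpu(i, vcpu, kvm) {
                if (!acquired) {
                        /* First vcpu: use the role as the lockdep subclass. */
                        if (mutex_lock_killable_nested(&vcpu->mutex, role))
                                goto out_unlock;
                        acquired = true;
                } else {
                        /* Remaining vcpus: plain killable lock, subclass 0. */
                        if (mutex_lock_killable(&vcpu->mutex))
                                goto out_unlock;
                }
        }
        return 0;

out_unlock:
        /* Drop only the mutexes taken so far; vcpu i was never locked. */
        kvm_for_each_vcpu(j, vcpu, kvm) {
                if (j == i)
                        break;
                mutex_unlock(&vcpu->mutex);
        }
        return -EINTR;
}

static void sev_unlock_vcpus_for_migration(struct kvm *kvm)
{
        struct kvm_vcpu *vcpu;
        unsigned long i;

        kvm_for_each_vcpu(i, vcpu, kvm)
                mutex_unlock(&vcpu->mutex);
}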