On Fri, Apr 29, 2022 at 9:38 AM Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
>
> On 4/29/22 17:35, Peter Gonda wrote:
> > On Thu, Apr 28, 2022 at 5:59 PM Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
> >>
> >> On 4/28/22 23:28, Peter Gonda wrote:
> >>>
> >>> So when actually trying this out I noticed that we are releasing the
> >>> current vcpu iterator, but really we haven't actually taken that lock
> >>> yet. So we'd need to maintain a prev_* pointer and release that one.
> >>
> >> Not entirely true because all vcpu->mutex.dep_maps will be for the same
> >> lock. The dep_map is essentially a fancy string, in this case
> >> "&vcpu->mutex".
> >>
> >> See the definition of mutex_init:
> >>
> >> #define mutex_init(mutex)                              \
> >> do {                                                   \
> >>         static struct lock_class_key __key;            \
> >>                                                        \
> >>         __mutex_init((mutex), #mutex, &__key);         \
> >> } while (0)
> >>
> >> and the dep_map field is initialized with
> >>
> >>   lockdep_init_map_wait(&lock->dep_map, name, key, 0, LD_WAIT_SLEEP);
> >>
> >> (i.e. all vcpu->mutexes share the same name and key because they have a
> >> single mutex_init-ialization site). Lockdep is as crude in theory as it
> >> is effective in practice!
> >>
> >>> bool acquired = false;
> >>> kvm_for_each_vcpu(...) {
> >>>         if (!acquired) {
> >>>                 if (mutex_lock_killable_nested(&vcpu->mutex, role))
> >>>                         goto out_unlock;
> >>>                 acquired = true;
> >>>         } else {
> >>>                 if (mutex_lock_killable(&vcpu->mutex))
> >>>                         goto out_unlock;
> >>
> >> This will cause a lockdep splat because it uses subclass 0. All the
> >> *_nested functions allow you to specify a subclass other than zero.
> >
> > OK, got it. I now have this to lock:
> >
> > kvm_for_each_vcpu (i, vcpu, kvm) {
> >         if (prev_vcpu != NULL) {
> >                 mutex_release(&prev_vcpu->mutex.dep_map, _THIS_IP_);
> >                 prev_vcpu = NULL;
> >         }
> >
> >         if (mutex_lock_killable_nested(&vcpu->mutex, role))
> >                 goto out_unlock;
> >         prev_vcpu = vcpu;
> > }
> >
> > But I've noticed the unlocking is in the wrong order: since we are
> > using kvm_for_each_vcpu(), I think we need a kvm_for_each_vcpu_rev() or
> > something, which may be a bit more work:
> > https://elixir.bootlin.com/linux/latest/source/lib/xarray.c#L1119
>
> No, you don't need any of this. You can rely on there being only one
> depmap, otherwise you wouldn't need the mock releases and acquires at
> all. Also the unlocking order does not matter for deadlocks, only the
> locking order does. You're overdoing it. :)

Hmm, I'm slightly confused here then. If I take your original suggestion of:

bool acquired = false;
kvm_for_each_vcpu(...) {
        if (acquired)
                mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);  <-- Warning here
        if (mutex_lock_killable_nested(&vcpu->mutex, role))
                goto out_unlock;
        acquired = true;
}

"""
[ 2810.088982] =====================================
[ 2810.093687] WARNING: bad unlock balance detected!
[ 2810.098388] 5.17.0-dbg-DEV #5 Tainted: G           O
[ 2810.103788] -------------------------------------
[ 2810.108490] sev_migrate_tes/107600 is trying to release lock (&vcpu->mutex) at:
[ 2810.115798] [<ffffffffb7cd3592>] sev_lock_vcpus_for_migration+0xe2/0x1e0
[ 2810.122497] but there are no more locks to release!
[ 2810.127376]
other info that might help us debug this:
[ 2810.133911] 3 locks held by sev_migrate_tes/107600:
[ 2810.138791]  #0: ffffa6cbf31ca3b8 (&kvm->lock){+.+.}-{3:3}, at: sev_vm_move_enc_context_from+0x96/0x690
[ 2810.148178]  #1: ffffa6cbf28523b8 (&kvm->lock/1){+.+.}-{3:3}, at: sev_vm_move_enc_context_from+0xae/0x690
[ 2810.157738]  #2: ffff9220683b01f8 (&vcpu->mutex){+.+.}-{3:3}, at: sev_lock_vcpus_for_migration+0x89/0x1e0
"""

This makes sense to me given we are actually trying to release a lock we
haven't locked yet. So that's why I thought we'd need to work with a
prev_vcpu pointer. The behavior I've observed is slightly different than
I'd expect from your statement "(i.e. all vcpu->mutexes share the same
name and key because they have a single mutex_init-ialization site)."

Ack about the unlocking order, that makes things easier.

>
> Paolo
>
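[Editor's note: the thread above converges on reordering the mock release so
it happens *after* the corresponding lock is taken, which avoids the "bad
unlock balance" splat: lockdep then only ever tracks the first vcpu->mutex
(acquired with the migration role as subclass) plus the one currently being
acquired, so its held-lock table is never exhausted while the real locks stay
held. The sketch below illustrates that shape; it is kernel code, not a
standalone-buildable example, and it assumes the sev_migration_role enum from
this patch series, with SEV_NR_MIGRATION_ROLES reused as a spare subclass
that cannot collide with either real role.]

```c
static int sev_lock_vcpus_for_migration(struct kvm *kvm,
					enum sev_migration_role role)
{
	struct kvm_vcpu *vcpu;
	unsigned long i, j;
	bool first = true;

	kvm_for_each_vcpu(i, vcpu, kvm) {
		/* Lock first, so the dep_map release below is balanced. */
		if (mutex_lock_killable_nested(&vcpu->mutex, role))
			goto out_unlock;

		if (first) {
			/*
			 * Switch to a subclass that cannot collide with the
			 * role used for the first vcpu mutex, which stays
			 * tracked by lockdep for the whole critical section.
			 */
			role = SEV_NR_MIGRATION_ROLES;
			first = false;
		} else {
			/* Mock release: drop lockdep's record, keep the lock. */
			mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
		}
	}

	return 0;

out_unlock:
	/* Unlock order does not matter for deadlock avoidance. */
	first = true;
	kvm_for_each_vcpu(j, vcpu, kvm) {
		if (j == i)
			break;

		if (first)
			first = false;
		else
			/* Re-acquire the lockdep record before the real unlock. */
			mutex_acquire(&vcpu->mutex.dep_map, role, 0, _THIS_IP_);

		mutex_unlock(&vcpu->mutex);
	}
	return -EINTR;
}
```

The error path mirrors the lock path: every dep_map entry that was mock
released is mock re-acquired with mutex_acquire() just before the real
mutex_unlock(), so lockdep's bookkeeping stays balanced in both directions.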