On 29/04/21 02:40, Sean Christopherson wrote:
On Thu, Apr 29, 2021, Paolo Bonzini wrote:
it's not ugly and it's still relatively easy to explain.
LOL, that's debatable.
From your remark below it looks like we have different priorities on
what to avoid modifying.
I like the locks to be either very coarse or fine-grained enough for
them to be leaves, as I find that to be the easiest way to avoid
deadlocks and complex hierarchies. For this reason, I treat unlocking
in the middle of a large critical section as "scary by default"; you
have to worry about which invariants might be required (in the case of
RCU, which pointers might be stored somewhere and would be invalidated),
and which locks are taken at that point so that the subsequent relocking
would change the lock order from AB to BA.
This applies to every path leading to the unlock/relock. So instead
what matters IMO is shielding architecture code from the races that Ben
had to point out to me, _and being able to apply easily explained
rules_ outside the more complex core code.
So, well, "relatively easy" because it's indeed subtle. But if you
consider what the locking rules are, "you can choose to protect
slots->arch data with this mutex and it will have no problematic
interactions with the memslot copy/update code" is as simple as it can get.
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2799c6660cce..48929dd5fb29 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1377,16 +1374,17 @@ static int kvm_set_memslot(struct kvm *kvm,
goto out_slots;
update_memslots(slots, new, change);
- slots = install_new_memslots(kvm, as_id, slots);
+ install_new_memslots(kvm, as_id, slots);
kvm_arch_commit_memory_region(kvm, mem, old, new, change);
-
- kvfree(slots);
return 0;
out_slots:
- if (change == KVM_MR_DELETE || change == KVM_MR_MOVE)
+ if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
+ slot = id_to_memslot(slots, old->id);
+ slot->flags &= ~KVM_MEMSLOT_INVALID;
Modifying flags on an SRCU-protected field outside of said protection is sketchy.
It's probably ok to do this prior to the generation update, emphasis on
"probably". Of course, the VM is also likely about to be killed in this case...
slots = install_new_memslots(kvm, as_id, slots);
This will explode if memory allocation for KVM_MR_MOVE fails. In that case,
the rmaps for "slots" will have been cleared by kvm_alloc_memslot_metadata().
I take your subsequent reply as a sort-of-review that the above approach
works, though we may disagree on its elegance and complexity.
Paolo
The SRCU index is already tracked in vcpu->srcu_idx, why not temporarily drop
the SRCU lock if activate_shadow_mmu() needs to do work so that it can take
slots_lock? That seems simpler and I think would avoid modifying the common
memslot code.
kvm_arch_async_page_ready() is the only path for reaching kvm_mmu_reload() that
looks scary, but that should be impossible to reach with the correct MMU context.
We could always add an explicit sanity check on the rmaps being available.