Re: [PATCH 4/6] Revert "KVM: Fix vcpu_array[0] races"

On 10/9/24 17:04, Sean Christopherson wrote:
Now that KVM loads from vcpu_array if and only if the target index is
valid with respect to online_vcpus, i.e. now that it is safe to erase a
not-fully-onlined vCPU entry, revert to storing into vcpu_array before
success is guaranteed.

If xa_store() fails, which _should_ be impossible, then putting the vCPU's
reference to 'struct kvm' results in a refcounting bug as the vCPU fd has
been installed and owns the vCPU's reference.

This was found by inspection, but forcing the xa_store() to fail
confirms the problem:

  | Unable to handle kernel paging request at virtual address ffff800080ecd960
  | Call trace:
  |  _raw_spin_lock_irq+0x2c/0x70
  |  kvm_irqfd_release+0x24/0xa0
  |  kvm_vm_release+0x1c/0x38
  |  __fput+0x88/0x2ec
  |  ____fput+0x10/0x1c
  |  task_work_run+0xb0/0xd4
  |  do_exit+0x210/0x854
  |  do_group_exit+0x70/0x98
  |  get_signal+0x6b0/0x73c
  |  do_signal+0xa4/0x11e8
  |  do_notify_resume+0x60/0x12c
  |  el0_svc+0x64/0x68
  |  el0t_64_sync_handler+0x84/0xfc
  |  el0t_64_sync+0x190/0x194
  | Code: b9000909 d503201f 2a1f03e1 52800028 (88e17c08)

Practically speaking, this is a non-issue as xa_store() can't fail, absent
a nasty kernel bug.  But the code is visually jarring and technically
broken.

This reverts commit afb2acb2e3a32e4d56f7fbd819769b98ed1b7520.

Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
Cc: Michal Luczaj <mhal@xxxxxxx>
Cc: Alexander Potapenko <glider@xxxxxxxxxx>
Cc: Marc Zyngier <maz@xxxxxxxxxx>
Reported-by: Will Deacon <will@xxxxxxxxxx>
Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
---
  virt/kvm/kvm_main.c | 14 +++++---------
  1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index fca9f74e9544..f081839521ef 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4283,7 +4283,8 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id)
  	}
vcpu->vcpu_idx = atomic_read(&kvm->online_vcpus);
-	r = xa_reserve(&kvm->vcpu_array, vcpu->vcpu_idx, GFP_KERNEL_ACCOUNT);
+	r = xa_insert(&kvm->vcpu_array, vcpu->vcpu_idx, vcpu, GFP_KERNEL_ACCOUNT);
+	BUG_ON(r == -EBUSY);
  	if (r)
  		goto unlock_vcpu_destroy;
@@ -4298,12 +4299,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id)
  	kvm_get_kvm(kvm);
  	r = create_vcpu_fd(vcpu);
  	if (r < 0)
-		goto kvm_put_xa_release;
-
-	if (KVM_BUG_ON(xa_store(&kvm->vcpu_array, vcpu->vcpu_idx, vcpu, 0), kvm)) {
-		r = -EINVAL;
-		goto kvm_put_xa_release;
-	}
+		goto kvm_put_xa_erase;

I also find it a bit jarring, though, that we have to undo the insertion. This is a chicken-and-egg situation: whichever order you choose, you pick one operation B that has to undo operation A if B fails. What xa_store() was doing was breaking that deadlock.

The code is a bit longer, sure, but I don't see the point in complicating the vcpu_array invariants and letting an entry disappear.

The rest of the series is still good, of course.

Paolo

  	/*
  	 * Pairs with smp_rmb() in kvm_get_vcpu.  Store the vcpu
@@ -4318,10 +4314,10 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id)
  	kvm_create_vcpu_debugfs(vcpu);
  	return r;
-kvm_put_xa_release:
+kvm_put_xa_erase:
  	mutex_unlock(&vcpu->mutex);
  	kvm_put_kvm_no_destroy(kvm);
-	xa_release(&kvm->vcpu_array, vcpu->vcpu_idx);
+	xa_erase(&kvm->vcpu_array, vcpu->vcpu_idx);
  unlock_vcpu_destroy:
  	mutex_unlock(&kvm->lock);
  	kvm_dirty_ring_free(&vcpu->dirty_ring);

