Re: [PATCH v10 3/7] KVM: Support dirty ring in conjunction with bitmap

On Thu, Nov 10, 2022, Gavin Shan wrote:
> @@ -3305,7 +3305,10 @@ void mark_page_dirty_in_slot(struct kvm *kvm,
>  	struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
>  
>  #ifdef CONFIG_HAVE_KVM_DIRTY_RING
> -	if (WARN_ON_ONCE(!vcpu) || WARN_ON_ONCE(vcpu->kvm != kvm))
> +	if (WARN_ON_ONCE(vcpu && vcpu->kvm != kvm))
> +		return;
> +
> +	if (WARN_ON_ONCE(!kvm_arch_allow_write_without_running_vcpu(kvm) && !vcpu))

Nit, the !vcpu check should come first.  99.9% of the time @vcpu will be non-NULL,
in which case the call to kvm_arch_allow_write_without_running_vcpu() can be
avoided.  And checking for !vcpu should also be cheaper than said call.
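
E.g., just a sketch of the reordering against the quoted hunk, before any of the
changes suggested below:

	if (WARN_ON_ONCE(!vcpu && !kvm_arch_allow_write_without_running_vcpu(kvm)))
		return;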

Since the below code is gaining a check on "vcpu" when using the dirty ring, and
needs to gain a check on memslot->dirty_bitmap, I think it makes sense to let KVM
continue on instead of bailing.  I.e. fill the dirty bitmap if possible instead
of dropping the dirty info and potentially corrupting guest memory.

The "vcpu->kvm != kvm" check is different; if that fails, KVM would potentially
log the dirty page into the wrong VM's context, which could be fatal to one or
both VMs.

If it's not too late to rewrite history, this could even be done as a prep patch,
e.g. with a changelog explaining that KVM should try to log to the dirty bitmap
even if KVM has a bug where a write to guest memory was triggered without a running
vCPU.

---
 virt/kvm/kvm_main.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 43bbe4fde078..b1115bbd6038 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3313,18 +3313,20 @@ void mark_page_dirty_in_slot(struct kvm *kvm,
 	struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
 
 #ifdef CONFIG_HAVE_KVM_DIRTY_RING
-	if (WARN_ON_ONCE(!vcpu) || WARN_ON_ONCE(vcpu->kvm != kvm))
+	if (WARN_ON_ONCE(vcpu && vcpu->kvm != kvm))
 		return;
+
+	WARN_ON_ONCE(!vcpu);
 #endif
 
 	if (memslot && kvm_slot_dirty_track_enabled(memslot)) {
 		unsigned long rel_gfn = gfn - memslot->base_gfn;
 		u32 slot = (memslot->as_id << 16) | memslot->id;
 
-		if (kvm->dirty_ring_size)
+		if (kvm->dirty_ring_size && vcpu)
 			kvm_dirty_ring_push(&vcpu->dirty_ring,
 					    slot, rel_gfn);
-		else
+		else if (memslot->dirty_bitmap)
 			set_bit_le(rel_gfn, memslot->dirty_bitmap);
 	}
 }

base-commit: 01b4689ee519329ce5f50ae02692e8acdaba0beb
-- 



>  		return;
>  #endif
>  
> @@ -3313,7 +3316,7 @@ void mark_page_dirty_in_slot(struct kvm *kvm,
>  		unsigned long rel_gfn = gfn - memslot->base_gfn;
>  		u32 slot = (memslot->as_id << 16) | memslot->id;
>  
> -		if (kvm->dirty_ring_size)
> +		if (kvm->dirty_ring_size && vcpu)
>  			kvm_dirty_ring_push(vcpu, slot, rel_gfn);
>  		else
>  			set_bit_le(rel_gfn, memslot->dirty_bitmap);

Not checking memslot->dirty_bitmap will allow a malicious userspace to induce a
NULL pointer dereference, e.g. enable dirty ring without the bitmap and save the
ITS tables.  The KVM_DEV_ARM_ITS_SAVE_TABLES path in patch 4 doesn't check that
kvm_use_dirty_bitmap() is true.
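
If ARM does want to gate the ioctl on the bitmap, a guard along these lines early
in the KVM_DEV_ARM_ITS_SAVE_TABLES path would close the hole (rough sketch only;
the exact spot in patch 4 and the error code are guesses on my end):

	/* Saving the ITS tables dirties guest memory without a running vCPU. */
	if (!kvm_use_dirty_bitmap(kvm))
		return -EINVAL;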

If requiring kvm_use_dirty_bitmap() to do KVM_DEV_ARM_ITS_SAVE_TABLES is deemed
too prescriptive, the best this code can do is:

		if (kvm->dirty_ring_size && vcpu)
			kvm_dirty_ring_push(&vcpu->dirty_ring,
					    slot, rel_gfn);
		else if (memslot->dirty_bitmap)
			set_bit_le(rel_gfn, memslot->dirty_bitmap);

If ARM rejects KVM_DEV_ARM_ITS_SAVE_TABLES, then this can be:

		if (kvm->dirty_ring_size && vcpu)
			kvm_dirty_ring_push(&vcpu->dirty_ring,
					    slot, rel_gfn);
		else if (!WARN_ON_ONCE(!memslot->dirty_bitmap))
			set_bit_le(rel_gfn, memslot->dirty_bitmap);


