Re: [PATCH v4 2/4] KVM: arm64: Dirty quota-based throttling of vcpus

On 24/05/22 12:44 pm, Marc Zyngier wrote:
> On Sat, 21 May 2022 21:29:38 +0100,
> Shivam Kumar <shivam.kumar1@xxxxxxxxxxx> wrote:
>> Exit to userspace whenever the dirty quota is exhausted (i.e. dirty count
>> equals/exceeds dirty quota) to request more dirty quota.
>>
>> Suggested-by: Shaju Abraham <shaju.abraham@xxxxxxxxxxx>
>> Suggested-by: Manish Mishra <manish.mishra@xxxxxxxxxxx>
>> Co-developed-by: Anurag Madnawat <anurag.madnawat@xxxxxxxxxxx>
>> Signed-off-by: Anurag Madnawat <anurag.madnawat@xxxxxxxxxxx>
>> Signed-off-by: Shivam Kumar <shivam.kumar1@xxxxxxxxxxx>
>> ---
>>  arch/arm64/kvm/arm.c | 3 +++
>>  1 file changed, 3 insertions(+)
>>
>> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
>> index ecc5958e27fe..5b6a239b83a5 100644
>> --- a/arch/arm64/kvm/arm.c
>> +++ b/arch/arm64/kvm/arm.c
>> @@ -848,6 +848,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>>  	ret = 1;
>>  	run->exit_reason = KVM_EXIT_UNKNOWN;
>>  	while (ret > 0) {
>> +		ret = kvm_vcpu_check_dirty_quota(vcpu);
>> +		if (!ret)
>> +			break;
>>  		/*
>>  		 * Check conditions before entering the guest
>>  		 */
>
> Why do we need yet another check on the fast path? It seems to me that
> this is what requests are for, so I'm definitely not keen on this
> approach. I certainly do not want any extra overhead for something
> that is only used on migration. If anything, it is the migration path
> that should incur the overhead.
>
> 	M.

I'll try implementing this with requests. Thanks.
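
Something along these lines is what I have in mind (a rough sketch only;
the request bit, its number, the exit reason and the dirty count/quota
field names below are placeholders, not final):

/* Placeholder request bit; the actual name and number are TBD. */
#define KVM_REQ_DIRTY_QUOTA_EXIT	KVM_ARCH_REQ(6)

/*
 * On the dirtying path (e.g. when the page is marked dirty in the slot),
 * raise a request once the vcpu's dirty count reaches its quota, instead
 * of checking the quota in the run loop:
 */
	if (++vcpu->dirty_count >= READ_ONCE(vcpu->run->dirty_quota))
		kvm_make_request(KVM_REQ_DIRTY_QUOTA_EXIT, vcpu);

/*
 * Then consume the request from check_vcpu_requests() and exit to
 * userspace to ask for more quota:
 */
	if (kvm_check_request(KVM_REQ_DIRTY_QUOTA_EXIT, vcpu)) {
		vcpu->run->exit_reason = KVM_EXIT_DIRTY_QUOTA_EXHAUSTED;
		/* ...and make the run loop return to userspace. */
	}

This would move the overhead to the dirtying (migration) path and leave
the fast path untouched, as you suggested.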


