On 27/02/23 7:19 am, Marc Zyngier wrote:
> On Sat, 25 Feb 2023 20:48:01 +0000,
> Shivam Kumar <shivam.kumar1@xxxxxxxxxxx> wrote:
>> Call update_dirty_quota whenever a page is marked dirty with the
>> appropriate arch-specific page size. Process the KVM request
>> KVM_REQ_DIRTY_QUOTA_EXIT (raised by update_dirty_quota) to exit to
>> userspace with exit reason KVM_EXIT_DIRTY_QUOTA_EXHAUSTED.
>>
>> Suggested-by: Shaju Abraham <shaju.abraham@xxxxxxxxxxx>
>> Suggested-by: Manish Mishra <manish.mishra@xxxxxxxxxxx>
>> Co-developed-by: Anurag Madnawat <anurag.madnawat@xxxxxxxxxxx>
>> Signed-off-by: Anurag Madnawat <anurag.madnawat@xxxxxxxxxxx>
>> Signed-off-by: Shivam Kumar <shivam.kumar1@xxxxxxxxxxx>
>> ---
>>  arch/arm64/kvm/Kconfig | 1 +
>>  arch/arm64/kvm/arm.c   | 7 +++++++
>>  arch/arm64/kvm/mmu.c   | 3 +++
>>  3 files changed, 11 insertions(+)
>>
>> diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
>> index ca6eadeb7d1a..8e7dea2c3a9f 100644
>> --- a/arch/arm64/kvm/Kconfig
>> +++ b/arch/arm64/kvm/Kconfig
>> @@ -44,6 +44,7 @@ menuconfig KVM
>>  	select SCHED_INFO
>>  	select GUEST_PERF_EVENTS if PERF_EVENTS
>>  	select INTERVAL_TREE
>> +	select HAVE_KVM_DIRTY_QUOTA

> So this is selected unconditionally...

>>  	help
>>  	  Support hosting virtualized guest machines.
>>
>> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
>> index 3bd732eaf087..5162b2fc46a1 100644
>> --- a/arch/arm64/kvm/arm.c
>> +++ b/arch/arm64/kvm/arm.c
>> @@ -757,6 +757,13 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
>>  		if (kvm_dirty_ring_check_request(vcpu))
>>  			return 0;
>> +
>> +#ifdef CONFIG_HAVE_KVM_DIRTY_QUOTA

> ... and yet you litter the arch code with #ifdefs...
Sorry about that. The #ifdefs are not required here, since
HAVE_KVM_DIRTY_QUOTA is selected unconditionally on arm64; I will drop
them in the next version.
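For reference, the request check would then be unconditional (untested
sketch):

	/* No #ifdef needed: HAVE_KVM_DIRTY_QUOTA is always selected. */
	if (kvm_check_request(KVM_REQ_DIRTY_QUOTA_EXIT, vcpu)) {
		vcpu->run->exit_reason = KVM_EXIT_DIRTY_QUOTA_EXHAUSTED;
		return 0;
	}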
>> +	if (kvm_check_request(KVM_REQ_DIRTY_QUOTA_EXIT, vcpu)) {
>> +		vcpu->run->exit_reason = KVM_EXIT_DIRTY_QUOTA_EXHAUSTED;
>> +		return 0;

> What rechecks the quota on entry?
Right now, we are not rechecking the quota on entry. So, if userspace
doesn't replenish the quota, the vcpu simply runs until it next tries
to dirty a page.

I think it's a good idea to check the quota on entry and keep exiting
to userspace until the remaining quota is positive. I can add this in
the next patchset.
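Roughly along these lines in check_vcpu_requests() (illustrative
sketch, untested; dirty_quota and pages_dirtied are the fields this
series already uses):

	/*
	 * Keep bouncing to userspace until it has actually raised
	 * the quota above the pages already dirtied.
	 */
	if (vcpu->run->dirty_quota &&
	    vcpu->stat.generic.pages_dirtied >= vcpu->run->dirty_quota) {
		vcpu->run->exit_reason = KVM_EXIT_DIRTY_QUOTA_EXHAUSTED;
		return 0;
	}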
Thanks.
>> +	}
>> +#endif
>>  	}
>>
>>  	return 1;
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index 7113587222ff..baf416046f46 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -1390,6 +1390,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>  	/* Mark the page dirty only if the fault is handled successfully */
>>  	if (writable && !ret) {
>>  		kvm_set_pfn_dirty(pfn);
>> +#ifdef CONFIG_HAVE_KVM_DIRTY_QUOTA
>> +		update_dirty_quota(kvm, fault_granule);

> fault_granule isn't necessarily the amount that gets dirtied.
>
> 	M.
For most of the paths where we update the quota, we cannot track (or
precisely account for) dirtying at a granularity finer than the
minimum page size. Looking forward to your thoughts on what we can do
better here.
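One option might be to charge the size we actually map rather than the
faulting granule, e.g. (rough sketch, untested; assuming vma_pagesize
still holds the size passed to kvm_pgtable_stage2_map() at this point):

	/* Mark the page dirty only if the fault is handled successfully */
	if (writable && !ret) {
		kvm_set_pfn_dirty(pfn);
		/* Charge what was actually mapped, not fault_granule. */
		update_dirty_quota(kvm, vma_pagesize);
	}

This would still over-account when a block mapping is only partially
dirtied, but it avoids charging a granule larger than the mapping
itself.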
Thanks,
Shivam