Re: [PATCH v3 2/3] KVM: Documentation: Update kvm_run structure for dirty quota

On Thu, Mar 31, 2022, Shivam Kumar wrote:
> 
> On 31/03/22 6:10 am, Sean Christopherson wrote:
> > On Sun, Mar 06, 2022, Shivam Kumar wrote:
> > > Update the kvm_run structure with a brief description of dirty
> > > quota members and how dirty quota throttling works.
> > This should be squashed with patch 1.  I actually had to look ahead to this patch
> > because I forgot the details since I last reviewed this :-)
> Ack. Thanks.
> > > +	__u64 dirty_quota;
> > > +Please note that this quota cannot be strictly enforced if PML is enabled, and
> > > +the VCPU may end up dirtying pages more than its quota. The difference however
> > > +is bounded by the PML buffer size.
> > If you want to be pedantic, I doubt KVM can strictly enforce the quota even if PML
> > is disabled.  E.g. I can all but guarantee that it's possible to dirty multiple
> > pages during a single exit.  Probably also worth spelling out PML and genericizing
> > things.  Maybe
> > 
> >    Please note that enforcing the quota is best effort, as the guest may dirty
> >    multiple pages before KVM can recheck the quota.  However, unless KVM is using
> >    a hardware-based dirty ring buffer, e.g. Intel's Page Modification Logging,
> >    KVM will detect quota exhaustion within a handful of dirtied pages.  If a
> >    hardware ring buffer is used, the overrun is bounded by the size of the buffer
> >    (512 entries for PML).
> Thank you for the blurb. Looks good to me, though I'm curious about the exits
> that can dirty multiple pages.

Anything that touches multiple pages.  nested_mark_vmcs12_pages_dirty() is an
easy example.  Emulating L2 with nested TDP.  An emulated instruction that splits
a page.  I'm pretty sure FNAME(sync_page) could dirty an entire page worth of
SPTEs, and that's waaay too deep to bail from.
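
For reference, this is roughly what nested_mark_vmcs12_pages_dirty() looks like
(trimmed from arch/x86/kvm/vmx/nested.c); both kvm_vcpu_mark_page_dirty() calls
can fire on a single exit, with no opportunity to recheck the quota in between:

static void nested_mark_vmcs12_pages_dirty(struct kvm_vcpu *vcpu)
{
        struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
        gfn_t gfn;

        /* The APIC access page is never written by the CPU, skip it. */

        if (nested_cpu_has(vmcs12, CPU_BASED_TPR_SHADOW)) {
                gfn = vmcs12->virtual_apic_page_addr >> PAGE_SHIFT;
                kvm_vcpu_mark_page_dirty(vcpu, gfn);
        }

        if (nested_cpu_has_posted_intr(vmcs12)) {
                gfn = vmcs12->posted_intr_desc_addr >> PAGE_SHIFT;
                kvm_vcpu_mark_page_dirty(vcpu, gfn);
        }
}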

Oof, looking at sync_page(), that's a bug in patch 1.  make_spte() guards the call
to mark_page_dirty_in_slot() with kvm_slot_dirty_track_enabled(), which means it
won't honor the dirty quota unless dirty logging is enabled.  Probably not an issue
for the intended use case, but it'll result in wrong stats, and technically the
dirty quota can be enabled without dirty logging being enabled.

diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 4739b53c9734..df0349be388b 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -182,7 +182,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
                  "spte = 0x%llx, level = %d, rsvd bits = 0x%llx", spte, level,
                  get_rsvd_bits(&vcpu->arch.mmu->shadow_zero_check, spte, level));

-       if ((spte & PT_WRITABLE_MASK) && kvm_slot_dirty_track_enabled(slot)) {
+       if (spte & PT_WRITABLE_MASK) {
                /* Enforced by kvm_mmu_hugepage_adjust. */
                WARN_ON(level > PG_LEVEL_4K);
                mark_page_dirty_in_slot(vcpu->kvm, slot, gfn);
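
(IIRC, mark_page_dirty_in_slot() itself guards the bitmap/ring update with
kvm_slot_dirty_track_enabled(), so dropping the check here should be a nop for
dirty logging proper and only change the quota/stats accounting.)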


And thinking more about silly edge cases, VMX's big emulation loop for invalid
guest state when unrestricted guest is disabled should probably explicitly check
the dirty quota.  Again, I doubt it matters to anyone's use case, but since that
loop is treated as a full run loop for things like pending signals, it'd be good
to be consistent.

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 84a7500cd80c..5e1ae373634c 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5511,6 +5511,9 @@ static int handle_invalid_guest_state(struct kvm_vcpu *vcpu)
                 */
                if (__xfer_to_guest_mode_work_pending())
                        return 1;
+
+               if (!kvm_vcpu_check_dirty_quota(vcpu))
+                       return 0;
        }

        return 1;
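
One more thought for the docs: it might help to show the expected userspace flow,
since (as I understand patch 1) the quota is a moving target that userspace bumps
on every dirty quota exit.  Rough sketch only; the exit reason name and the
throttling helper below are placeholders, substitute whatever patch 1 actually
defines:

#include <err.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>  /* patched headers providing run->dirty_quota */

extern void wait_for_next_interval(void);  /* placeholder throttling hook */

static void run_vcpu(int vcpu_fd, struct kvm_run *run, __u64 increment)
{
        /* Arm the initial quota before entering the guest. */
        run->dirty_quota = increment;

        for (;;) {
                if (ioctl(vcpu_fd, KVM_RUN, NULL) < 0)
                        err(1, "KVM_RUN");

                switch (run->exit_reason) {
                case KVM_EXIT_DIRTY_QUOTA_EXHAUSTED:  /* placeholder name */
                        /*
                         * Throttle: stall the vCPU until the next interval,
                         * then extend the quota and re-enter the guest.
                         */
                        wait_for_next_interval();
                        run->dirty_quota += increment;
                        break;
                default:
                        /* ... all the other exit reasons ... */
                        break;
                }
        }
}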


