On Thu, Mar 31, 2022, Sean Christopherson wrote:
> Oof, looking at sync_page(), that's a bug in patch 1.  make_spte() guards the call
> to mark_page_dirty_in_slot() with kvm_slot_dirty_track_enabled(), which means it
> won't honor the dirty quota unless dirty logging is enabled.  Probably not an issue
> for the intended use case, but it'll result in wrong stats, and technically the
> dirty quota can be enabled without dirty logging being enabled.
>
> diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
> index 4739b53c9734..df0349be388b 100644
> --- a/arch/x86/kvm/mmu/spte.c
> +++ b/arch/x86/kvm/mmu/spte.c
> @@ -182,7 +182,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
>                   "spte = 0x%llx, level = %d, rsvd bits = 0x%llx", spte, level,
>                   get_rsvd_bits(&vcpu->arch.mmu->shadow_zero_check, spte, level));
>
> -       if ((spte & PT_WRITABLE_MASK) && kvm_slot_dirty_track_enabled(slot)) {
> +       if (spte & PT_WRITABLE_MASK) {
>                 /* Enforced by kvm_mmu_hugepage_adjust. */
>                 WARN_ON(level > PG_LEVEL_4K);
>                 mark_page_dirty_in_slot(vcpu->kvm, slot, gfn);

This pseudopatch is buggy; the WARN_ON() will obviously fire.  Easiest thing would
be to move the dirty-tracking condition into the WARN_ON():

	WARN_ON(level > PG_LEVEL_4K && kvm_slot_dirty_track_enabled(slot));

That brings up another thing that's worth documenting: the dirty_count will be
skewed based on the size of the pages accessed by each vCPU.

I still think having the stat always count will be useful though.
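
For reference, combining the two changes above, the resulting hunk would read
roughly as follows (untested sketch, just to show the intent):

	if (spte & PT_WRITABLE_MASK) {
		/*
		 * Huge pages are disallowed when dirty tracking is enabled,
		 * enforced by kvm_mmu_hugepage_adjust(), so only warn if a
		 * huge SPTE shows up for a dirty-tracked slot.
		 */
		WARN_ON(level > PG_LEVEL_4K && kvm_slot_dirty_track_enabled(slot));
		mark_page_dirty_in_slot(vcpu->kvm, slot, gfn);
	}

I.e. mark_page_dirty_in_slot() runs for every writable SPTE so the dirty quota and
stat stay accurate even without dirty logging, while the sanity check still only
applies to slots that actually have dirty tracking enabled.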