On Mon, Jul 26, 2021 at 10:54 AM Mingwei Zhang <mizhang@xxxxxxxxxx> wrote:
>
> Factor in whether or not the old/new SPTEs are shadow-present when
> adjusting the large page stats in the TDP MMU. A modified MMIO SPTE can
> toggle the page size bit, as bit 7 is used to store the MMIO generation,
> i.e. is_large_pte() can get a false positive when called on a MMIO SPTE.
> Ditto for nuking SPTEs with REMOVED_SPTE, which sets bit 7 in its magic
> value.
>
> Opportunistically move the logic below the check to verify at least one
> of the old/new SPTEs is shadow present.
>
> Use is/was_leaf even though is/was_present would suffice. The code
> generation is roughly equivalent since all flags need to be computed
> prior to the code in question, and using the *_leaf flags will minimize
> the diff in a future enhancement to account all pages, i.e. will change
> the check to "is_leaf != was_leaf".
>
> Suggested-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> Signed-off-by: Mingwei Zhang <mizhang@xxxxxxxxxx>

Reviewed-by: Ben Gardon <bgardon@xxxxxxxxxx>

> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 20 +++++++++++++-------
>  1 file changed, 13 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index caac4ddb46df..cba2ab5db2a0 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -413,6 +413,7 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
>  	bool was_leaf = was_present && is_last_spte(old_spte, level);
>  	bool is_leaf = is_present && is_last_spte(new_spte, level);
>  	bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
> +	bool was_large, is_large;
>
>  	WARN_ON(level > PT64_ROOT_MAX_LEVEL);
>  	WARN_ON(level < PG_LEVEL_4K);
> @@ -446,13 +447,6 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
>
>  	trace_kvm_tdp_mmu_spte_changed(as_id, gfn, level, old_spte, new_spte);
>
> -	if (is_large_pte(old_spte) != is_large_pte(new_spte)) {
> -		if (is_large_pte(old_spte))
> -			atomic64_sub(1, (atomic64_t*)&kvm->stat.lpages);
> -		else
> -			atomic64_add(1, (atomic64_t*)&kvm->stat.lpages);
> -	}
> -
>  	/*
>  	 * The only times a SPTE should be changed from a non-present to
>  	 * non-present state is when an MMIO entry is installed/modified/
> @@ -478,6 +472,18 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
>  		return;
>  	}
>
> +	/*
> +	 * Update large page stats if a large page is being zapped, created, or
> +	 * is replacing an existing shadow page.
> +	 */
> +	was_large = was_leaf && is_large_pte(old_spte);
> +	is_large = is_leaf && is_large_pte(new_spte);
> +	if (was_large != is_large) {
> +		if (was_large)
> +			atomic64_sub(1, (atomic64_t *)&kvm->stat.lpages);
> +		else
> +			atomic64_add(1, (atomic64_t *)&kvm->stat.lpages);
> +	}
>
>  	if (was_leaf && is_dirty_spte(old_spte) &&
>  	    (!is_present || !is_dirty_spte(new_spte) || pfn_changed))
> --
> 2.32.0.432.gabb21c7263-goog
>
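
For anyone reading along who wants to see the failure mode concretely: below
is a stand-alone sketch (plain user-space C, not KVM code) of why trusting
is_large_pte() alone mis-counts lpages for MMIO SPTEs. The bit layout and the
is_shadow_present() helper are simplified assumptions made up for
illustration; the only fact taken from the changelog is that bit 7 doubles as
both the large-page bit and part of the MMIO generation.

/*
 * Stand-alone sketch, not kernel code.  Bit positions and helpers below are
 * simplified assumptions; only "bit 7 serves both as the leaf large-page bit
 * and as an MMIO-generation bit" comes from the patch description.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PT_PAGE_SIZE_BIT	(1ULL << 7)	/* large-page bit in a present leaf SPTE */
#define MMIO_GEN_BIT7		(1ULL << 7)	/* MMIO generation can also land on bit 7 */
#define PRESENT_BIT		(1ULL << 0)	/* hypothetical shadow-present marker */

static bool is_large_pte(uint64_t spte)      { return spte & PT_PAGE_SIZE_BIT; }
static bool is_shadow_present(uint64_t spte) { return spte & PRESENT_BIT; }

int main(void)
{
	/* An MMIO SPTE whose generation happens to set bit 7: not a real large page. */
	uint64_t old_spte = MMIO_GEN_BIT7;
	uint64_t new_spte = 0;			/* MMIO entry zapped */

	/* Old logic: trusts bit 7 alone and wrongly decrements the lpages stat. */
	bool buggy_dec = is_large_pte(old_spte) != is_large_pte(new_spte) &&
			 is_large_pte(old_spte);

	/* Patched logic: only count SPTEs that are shadow-present leaves. */
	bool was_large = is_shadow_present(old_spte) && is_large_pte(old_spte);
	bool is_large  = is_shadow_present(new_spte) && is_large_pte(new_spte);
	bool fixed_dec = was_large != is_large && was_large;

	printf("old logic decrements lpages: %d\n", buggy_dec);	/* 1: wrong */
	printf("new logic decrements lpages: %d\n", fixed_dec);	/* 0: right */
	return 0;
}

Gating on shadow-present leaves makes the bogus decrement disappear, which is
what the new was_large/is_large computation in __handle_changed_spte() does.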