On Fri, Jan 5, 2024 at 2:29 AM David Matlack <dmatlack@xxxxxxxxxx> wrote:
>
> On Wed, Jan 3, 2024 at 8:14 PM Liang Chen <liangchen.linux@xxxxxxxxx> wrote:
> >
> > On Wed, Jan 3, 2024 at 11:25 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > >
> > > +David
> > >
> > > On Wed, Jan 03, 2024, Liang Chen wrote:
> > > > Count the number of zapped pages of tdp_mmu for vm stat.
> > >
> > > Why? I don't necessarily disagree with the change, but it's also not obvious
> > > that this information is all that useful for the TDP MMU, e.g. the pf_fixed/taken
> > > stats largely capture the same information.
> > >
> >
> > We are attempting to make zapping specific to a particular memory
> > slot, something like below.
> >
> > void kvm_tdp_zap_pages_in_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
> > {
> >         struct kvm_mmu_page *root;
> >         struct tdp_iter iter;
> >
> >         gfn_t start = slot->base_gfn;
> >         gfn_t end = slot->base_gfn + slot->npages;
> >
> >         write_lock(&kvm->mmu_lock);
> >         rcu_read_lock();
> >
> >         for_each_tdp_mmu_root_yield_safe(kvm, root, false) {
> >                 for_each_tdp_pte_min_level(iter, root,
> >                                            root->role.level, start, end) {
> >                         if (tdp_mmu_iter_cond_resched(kvm, &iter, false, false))
> >                                 continue;
> >
> >                         if (!is_shadow_present_pte(iter.old_spte))
> >                                 continue;
> >
> >                         tdp_mmu_set_spte(kvm, &iter, 0);
> >                 }
> >         }
> >
> >         kvm_flush_remote_tlbs(kvm);
> >
> >         rcu_read_unlock();
> >         write_unlock(&kvm->mmu_lock);
> > }
> >
> > I noticed this was previously done for the legacy MMU, but it
> > encountered some subtle issues with VFIO. I'm not sure whether the
> > issue is still there with the TDP MMU, so we are trying to do more
> > testing and analysis before submitting a patch. This stat gives me a
> > convenient way to observe the number of pages being zapped.
>
> Note you could also use the existing tracepoint to observe the number
> of pages being zapped in a given test run, e.g.:
>
> perf stat -e kvmmmu:kvm_mmu_prepare_zap_page -- <cmd>

Sure. Thank you!
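
For reference, a minimal sketch of how such a counter could be exposed
through KVM's binary stats interface, mirroring the existing
mmu_shadow_zapped stat that the shadow MMU bumps in
kvm_mmu_prepare_zap_page(). The field name tdp_mmu_pages_zapped and the
increment site are assumptions for illustration, not the actual patch
under discussion:

        /* arch/x86/include/asm/kvm_host.h: add a per-VM counter. */
        struct kvm_vm_stat {
                struct kvm_vm_stat_generic generic;
                u64 mmu_shadow_zapped;
                /* ... other existing fields ... */
                u64 tdp_mmu_pages_zapped;       /* assumed field name */
        };

        /* arch/x86/kvm/x86.c: add a descriptor so the stat is exported
         * to userspace alongside the existing VM stats. */
        const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
                KVM_GENERIC_VM_STATS(),
                STATS_DESC_COUNTER(VM, mmu_shadow_zapped),
                /* ... other existing descriptors ... */
                STATS_DESC_COUNTER(VM, tdp_mmu_pages_zapped),
        };

        /* arch/x86/kvm/mmu/tdp_mmu.c: bump the counter wherever the
         * TDP MMU tears down a shadow page -- one plausible spot is
         * next to the existing trace_kvm_mmu_prepare_zap_page() call:
         */
                ++kvm->stat.tdp_mmu_pages_zapped;

With that in place, the counter is readable per VM through the
KVM_GET_STATS_FD file descriptor or under debugfs
(/sys/kernel/debug/kvm/<pid>-<fd>/), which complements the
perf-on-tracepoint approach above for longer-running observation.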