On Wed, Sep 29, 2021, David Stevens wrote:
> From: David Stevens <stevensd@xxxxxxxxxxxx>
>
> Remove two warnings that require ref counts for pages to be non-zero, as
> mapped pfns from follow_pfn may not have an initialized ref count.
>
> Signed-off-by: David Stevens <stevensd@xxxxxxxxxxxx>
> ---
>  arch/x86/kvm/mmu/mmu.c | 7 -------
>  virt/kvm/kvm_main.c    | 2 +-
>  2 files changed, 1 insertion(+), 8 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 5a1adcc9cfbc..3b469df63bcf 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -617,13 +617,6 @@ static int mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
>
>  	pfn = spte_to_pfn(old_spte);
>
> -	/*
> -	 * KVM does not hold the refcount of the page used by
> -	 * kvm mmu, before reclaiming the page, we should
> -	 * unmap it from mmu first.
> -	 */
> -	WARN_ON(!kvm_is_reserved_pfn(pfn) && !page_count(pfn_to_page(pfn)));

Have you actually observed false positives with this WARN?  I would expect
anything without a struct page to get filtered out by !kvm_is_reserved_pfn(pfn).

If you have observed false positives, I would strongly prefer we find a way to
keep the page_count() sanity check; it has proven very helpful in the past in
finding/debugging bugs during MMU development.

>  	if (is_accessed_spte(old_spte))
>  		kvm_set_pfn_accessed(pfn);
>
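For context, the filtering I'm referring to works roughly like this.  This is
a simplified sketch of kvm_is_reserved_pfn(), paraphrased from memory rather
than copied from virt/kvm/kvm_main.c; the real function has additional cases,
but the pfn_valid() gate is the part relevant here:

	/*
	 * Simplified sketch, not the verbatim implementation: a pfn with
	 * no valid struct page is treated as reserved, so the WARN's
	 * page_count() check should only ever run on pfns that actually
	 * have a struct page behind them.
	 */
	bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
	{
		if (pfn_valid(pfn))
			return PageReserved(pfn_to_page(pfn));

		return true;	/* no struct page backing this pfn */
	}

So the case I'd want spelled out is a pfn that is pfn_valid() and not
PageReserved(), yet has an unmanaged refcount, i.e. what the changelog says
follow_pfn can hand back.  If that's real, a narrower check seems preferable
to dropping the WARN entirely.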