On Wed, Dec 14, 2022, Robert Hoo wrote:
> On Tue, 2022-12-13 at 03:30 +0000, Sean Christopherson wrote:
> > ---
> >  arch/x86/kvm/mmu/tdp_mmu.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > index e2e197d41780..fd4ae99790d7 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > @@ -1203,7 +1203,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> >  		if (fault->huge_page_disallowed &&
> >  		    fault->req_level >= iter.level) {
> >  			spin_lock(&kvm->arch.tdp_mmu_pages_lock);
> > -			track_possible_nx_huge_page(kvm, sp);
> > +			if (sp->nx_huge_page_disallowed)
> > +				track_possible_nx_huge_page(kvm, sp);
> >  			spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
> >  		}
> >  	}
>
> Is this possible?
> The aforementioned situation happened, i.e. before above hunk
> track_possible_nx_huge_page(), the sp is zapped by some other task,
> tdp_mmu_unlink_sp() --> untrack_possible_nx_huge_page(kvm, sp):

It's possible for untrack_possible_nx_huge_page() to be called before the
above snippet, but the stat won't be decremented in that case since the page
won't be on the list of possible NX huge pages, i.e. list_empty() will be
true.

  void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp)
  {
	if (list_empty(&sp->possible_nx_huge_page_link))
		return;

	--kvm->stat.nx_lpage_splits;

And by not calling track_possible_nx_huge_page() (this bug fix),
nx_lpage_splits won't be incorrectly incremented.

>
> 	--kvm->stat.nx_lpage_splits;
>
> But looks like the stat for this sp hasn't been increased yet.
>
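
For reference, here is a simplified sketch of how the track/untrack pair keeps
nx_lpage_splits in sync with list membership (not the exact upstream code,
which lives in arch/x86/kvm/mmu/mmu.c, but it assumes the same shape): both
helpers check list_empty() first, so calling either one "out of order" is a
nop.

  /*
   * Sketch only: add the page to the possible-NX-huge-page list and bump the
   * stat, but only if it isn't already tracked.
   */
  void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp)
  {
	/* Already on the list => the stat was already incremented. */
	if (!list_empty(&sp->possible_nx_huge_page_link))
		return;

	++kvm->stat.nx_lpage_splits;
	list_add_tail(&sp->possible_nx_huge_page_link,
		      &kvm->arch.possible_nx_huge_pages);
  }

  /*
   * Sketch only: remove the page from the list and drop the stat, but only
   * if it was actually tracked.
   */
  void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp)
  {
	/* Never tracked => the stat was never incremented, nothing to undo. */
	if (list_empty(&sp->possible_nx_huge_page_link))
		return;

	--kvm->stat.nx_lpage_splits;
	list_del_init(&sp->possible_nx_huge_page_link);
  }

Because of that symmetry, an untrack that races ahead of the track is a nop,
and with this fix the track is skipped entirely once nx_huge_page_disallowed
has been cleared, so the stat and the list stay consistent regardless of
which task wins the race.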