On Wed, Jul 27, 2022, Yan Zhao wrote:
> On Sat, Jul 23, 2022 at 01:23:23AM +0000, Sean Christopherson wrote:
> 
> <snip>
> 
> > @@ -386,16 +385,18 @@ static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn,
> >  static void tdp_mmu_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp,
> >  			      bool shared)
> >  {
> > +	atomic64_dec(&kvm->arch.tdp_mmu_pages);
> > +
> > +	if (!sp->nx_huge_page_disallowed)
> > +		return;
> > +
> Does this read of sp->nx_huge_page_disallowed also need to be protected
> by tdp_mmu_pages_lock in the shared path?

No, because only one CPU can call tdp_mmu_unlink_sp() for a given shadow
page.  E.g. in a shared walk, the SPTE is zapped atomically and only the
CPU that "wins" gets to unlink the sp.  The extra lock is needed to
prevent list corruption, but the sp itself is thread safe.

FWIW, even if that guarantee didn't hold, checking the flag outside of
tdp_mmu_pages_lock is safe because false positives are ok.
untrack_possible_nx_huge_page() checks that the shadow page is actually
on the list, i.e. it's a nop if a different task unlinks the page first.

False negatives need to be avoided, but nx_huge_page_disallowed is
cleared only when untrack_possible_nx_huge_page() is guaranteed to be
called, i.e. true false negatives can't occur.

Hmm, but I think there's a missing smp_rmb(), which is needed to ensure
nx_huge_page_disallowed is read after observing the shadow-present SPTE
(that's being unlinked).  I'll add that in the next version.
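
Roughly something like this (a sketch only, not the final patch; the
locking/untrack details follow this series' version of
tdp_mmu_unlink_sp(), and the pairing assumes the account side sets the
flag before the SPTE is made shadow-present):

static void tdp_mmu_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp,
			      bool shared)
{
	atomic64_dec(&kvm->arch.tdp_mmu_pages);

	/*
	 * Ensure nx_huge_page_disallowed is read after observing the
	 * shadow-present SPTE that is being unlinked; pairs with the
	 * account side setting the flag before making the SPTE
	 * shadow-present.
	 */
	smp_rmb();

	if (!sp->nx_huge_page_disallowed)
		return;

	if (shared)
		spin_lock(&kvm->arch.tdp_mmu_pages_lock);
	else
		lockdep_assert_held_write(&kvm->mmu_lock);

	sp->nx_huge_page_disallowed = false;
	untrack_possible_nx_huge_page(kvm, sp);

	if (shared)
		spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
}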