On Fri, Dec 20, 2024, Yan Zhao wrote:
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > index 4508d868f1cd..2f15e0e33903 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > @@ -985,6 +985,11 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
> >  	if (fault->prefetch && is_shadow_present_pte(iter->old_spte))
> >  		return RET_PF_SPURIOUS;
> >  
> > +	if (is_shadow_present_pte(iter->old_spte) &&
> > +	    is_access_allowed(fault, iter->old_spte) &&
> > +	    is_last_spte(iter->old_spte, iter->level))
> One nit:
> Do we need to warn on pfn_changed?

Hmm, I definitely don't think we "need" to, but it's not a bad idea.  The
shadow MMU kinda sorta WARNs on this scenario:

	if (!was_rmapped) {
		WARN_ON_ONCE(ret == RET_PF_SPURIOUS);
		rmap_add(vcpu, slot, sptep, gfn, pte_access);
	}

My only hesitation in adding a WARN is that the fast page fault path has
similar logic and doesn't WARN.  That's rather silly on my part, because it
ideally would WARN too; grabbing the PFN just to WARN is what would make it
not-fast :-)

Want to post a patch?  I don't really want to squeeze the WARN into 6.13,
just in case there's some weird edge case we're forgetting.
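
If you do, here's a rough sketch of the kind of check I have in mind, on top
of the hunk quoted above (completely untested, and comparing spte_to_pfn() of
the old SPTE against fault->pfn is just my assumption for how to detect a
PFN change):

	if (is_shadow_present_pte(iter->old_spte) &&
	    is_access_allowed(fault, iter->old_spte) &&
	    is_last_spte(iter->old_spte, iter->level)) {
		/*
		 * WARN if the existing last-level SPTE points at a different
		 * PFN than the fault, i.e. if the supposedly spurious fault
		 * isn't actually spurious.
		 */
		WARN_ON_ONCE(fault->pfn != spte_to_pfn(iter->old_spte));
		return RET_PF_SPURIOUS;
	}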