From: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>

Explicitly check for an MMIO spte in the fast page fault flow.  TDX will
use a not-present entry for MMIO sptes, which can be mistaken for an
access-tracked spte since both have SPTE_SPECIAL_MASK set.

MMIO sptes are handled in handle_mmio_page_fault for non-TDX VMs, so this
patch does not affect them.  TDX will handle MMIO emulation through a
hypercall instead.

Signed-off-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
Signed-off-by: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
---
 arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 17252f39bd7c..51306b80f47c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3163,7 +3163,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	else
 		sptep = fast_pf_get_last_sptep(vcpu, fault->addr, &spte);
 
-	if (!is_shadow_present_pte(spte))
+	if (!is_shadow_present_pte(spte) || is_mmio_spte(spte))
 		break;
 
 	sp = sptep_to_sp(sptep);
-- 
2.25.1