s/Introduce/Use

This doesn't "introduce" anything, in the sense that it's an AMD-defined error
code flag. That matters because KVM *did* introduce/define PFERR_IMPLICIT_ACCESS.

On Thu, Jul 20, 2023, isaku.yamahata@xxxxxxxxx wrote:
> From: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
> 
> Add two PFERR codes to designate that the page fault is private and that
> it requires looking up memory attributes. The vendor kvm page fault
> handler should set PFERR_GUEST_ENC_MASK bit based on their fault
> information. It may or may not use the hardware value directly or
> parse the hardware value to set the bit.
> 
> For KVM_X86_PROTECTED_VM, ask memory attributes for the fault privateness.

...

> +static inline bool kvm_is_fault_private(struct kvm *kvm, gpa_t gpa, u64 error_code)
> +{
> +	/*
> +	 * This is racy with mmu_seq. If we hit a race, it would result in a
> +	 * spurious KVM_EXIT_MEMORY_FAULT.
> +	 */
> +	if (kvm->arch.vm_type == KVM_X86_SW_PROTECTED_VM)
> +		return kvm_mem_is_private(kvm, gpa_to_gfn(gpa));

Please synthesize the error code flag for SW-protected VMs, same as TDX, e.g.

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 20e289e872eb..de9e0a9c41e6 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5751,6 +5751,10 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
 	if (WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root.hpa)))
 		return RET_PF_RETRY;
 
+	if (vcpu->kvm->arch.vm_type == KVM_X86_SW_PROTECTED_VM &&
+	    kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(cr2_or_gpa)))
+		error_code |= PFERR_GUEST_ENC_MASK;
+
 	r = RET_PF_INVALID;
 	if (unlikely(error_code & PFERR_RSVD_MASK)) {
 		r = handle_mmio_page_fault(vcpu, cr2_or_gpa, direct);

Functionally it's the same, but I want all VM types to have the same source of
truth for private versus shared, and I really don't want kvm_is_fault_private()
to exist.
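
As a purely illustrative sketch of the consumer side (the helper name below is
hypothetical and is not part of this patch or the diff above): once the flag is
synthesized in the error code for every VM type, downstream code only has to
test the bit, which is what makes a separate kvm_is_fault_private() lookup
unnecessary.

	/*
	 * Hypothetical sketch: privateness is derived from the error code
	 * alone, with no per-VM-type lookup of memory attributes at fault
	 * time.
	 */
	static inline bool kvm_fault_flag_is_private(u64 error_code)
	{
		return !!(error_code & PFERR_GUEST_ENC_MASK);
	}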