On Tue, Jan 30, 2024 at 05:13:00PM -0800, Sean Christopherson wrote:
> On Mon, Oct 16, 2023, Michael Roth wrote:
> > For KVM_X86_SNP_VM, only the PFERR_GUEST_ENC_MASK flag is needed to
> > determine whether an #NPF is due to a private/shared access by the
> > guest. Implement that handling here. Also add handling needed to deal
> > with SNP guests which in some cases will make MMIO accesses with the
> > encryption bit.
>
> ...
>
> > @@ -4356,12 +4357,19 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
> >  		return RET_PF_EMULATE;
> >  	}
> >
> > -	if (fault->is_private != kvm_mem_is_private(vcpu->kvm, fault->gfn)) {
> > +	/*
> > +	 * In some cases SNP guests will make MMIO accesses with the encryption
> > +	 * bit set. Handle these via the normal MMIO fault path.
> > +	 */
> > +	if (!slot && private_fault && kvm_is_vm_type(vcpu->kvm, KVM_X86_SNP_VM))
> > +		private_fault = false;
>
> Why?  This is inarguably a guest bug.

AFAICT this isn't explicitly disallowed by the SNP spec. There was,
however, a set of security mitigations for SEV-ES that resulted in this
behavior being highly discouraged in Linux guest code:

  https://lkml.org/lkml/2020/10/20/464

as well as in OVMF guest code:

  https://edk2.groups.io/g/devel/message/69948

However, the OVMF guest code still allows one exception for accesses to
the local APIC base address, which is the only case I'm aware of that
triggers this condition:

  https://github.com/tianocore/edk2/blob/master/OvmfPkg/Library/CcExitLib/CcExitVcHandler.c#L100

I think the rationale there is that if the guest absolutely *knows* that
encrypted information is not stored at a particular MMIO address, then it
can selectively choose to allow for exceptional cases like these. So KVM
would need to allow for these cases in order to be fully compatible with
existing SNP guests that do this.
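For illustration, the special-casing being discussed can be modeled as a
standalone user-space sketch (the enum values and helper name here are
hypothetical stand-ins, not the actual KVM types or code):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the KVM_X86_* VM type identifiers. */
enum vm_type { VM_DEFAULT, VM_SW_PROTECTED, VM_SNP };

/*
 * Model of the __kvm_faultin_pfn() hunk quoted above: an SNP guest fault
 * on a GPA with no backing memslot (i.e. emulated MMIO) is demoted to a
 * shared fault even if the encryption bit was set, so it takes the normal
 * MMIO path rather than being treated as an error.
 */
bool resolve_private_fault(enum vm_type type, bool has_slot,
			   bool private_fault)
{
	if (!has_slot && private_fault && type == VM_SNP)
		return false;
	return private_fault;
}
```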
>
> > +	if (private_fault != kvm_mem_is_private(vcpu->kvm, fault->gfn)) {
> >  		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
> >  		return -EFAULT;
> >  	}
> >
> > -	if (fault->is_private)
> > +	if (private_fault)
> >  		return kvm_faultin_pfn_private(vcpu, fault);
> >
> >  	async = false;
> > diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> > index 759c8b718201..e5b973051ad9 100644
> > --- a/arch/x86/kvm/mmu/mmu_internal.h
> > +++ b/arch/x86/kvm/mmu/mmu_internal.h
> > @@ -251,6 +251,24 @@ struct kvm_page_fault {
> >
> >  int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
> >
> > +static bool kvm_mmu_fault_is_private(struct kvm *kvm, gpa_t gpa, u64 err)
> > +{
> > +	bool private_fault = false;
> > +
> > +	if (kvm_is_vm_type(kvm, KVM_X86_SNP_VM)) {
> > +		private_fault = !!(err & PFERR_GUEST_ENC_MASK);
> > +	} else if (kvm_is_vm_type(kvm, KVM_X86_SW_PROTECTED_VM)) {
> > +		/*
> > +		 * This handling is for gmem self-tests and guests that treat
> > +		 * userspace as the authority on whether a fault should be
> > +		 * private or not.
> > +		 */
> > +		private_fault = kvm_mem_is_private(kvm, gpa >> PAGE_SHIFT);
> > +	}
>
> This can be more simply:
>
> 	if (kvm_is_vm_type(kvm, KVM_X86_SNP_VM))
> 		return !!(err & PFERR_GUEST_ENC_MASK);
>
> 	if (kvm_is_vm_type(kvm, KVM_X86_SW_PROTECTED_VM))
> 		return kvm_mem_is_private(kvm, gpa >> PAGE_SHIFT);
>

Yes, indeed. But TDX has taken a different approach for the
SW_PROTECTED_VM case, where they do this check in kvm_mmu_page_fault()
and then synthesize PFERR_GUEST_ENC_MASK into error_code before calling
kvm_mmu_do_page_fault(). It's not in the v18 patchset AFAICT, but it's in
the tdx-upstream git branch that corresponds to it:

  https://github.com/intel/tdx/commit/3717a903ef453aa7b62e7eb65f230566b7f158d4

Would you prefer that SNP adopt the same approach?

-Mike
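As a rough standalone model of that TDX-style alternative (the bit value
below is a placeholder and the helper names are hypothetical, not the
actual TDX code): classify once up front by folding the userspace-managed
attribute into the error code, after which "is this fault private?" is a
single bit test for every VM type.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Placeholder bit for this model; the real PFERR layout lives in KVM. */
#define PFERR_GUEST_ENC_MASK (1ULL << 34)

/*
 * For a software-protected VM, synthesize the encryption bit into the
 * error code based on the userspace-managed private attribute, mirroring
 * what the hardware would report for an SNP guest.
 */
uint64_t synthesize_error_code(bool sw_protected_vm, bool gfn_is_private,
			       uint64_t error_code)
{
	if (sw_protected_vm && gfn_is_private)
		error_code |= PFERR_GUEST_ENC_MASK;
	return error_code;
}

/* Classification then reduces to the same bit test used for SNP. */
bool fault_is_private(uint64_t error_code)
{
	return !!(error_code & PFERR_GUEST_ENC_MASK);
}
```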