The KVM MMIO support uses bit 51 as the reserved bit to cause nested page
faults when a guest performs MMIO. The AMD memory encryption support uses
a CPUID function to define the encryption bit position. Given this, it is
possible for these bits to conflict.

Use svm_hardware_setup() to override the MMIO mask if memory encryption
support is enabled. When memory encryption support is enabled, the
physical address width is reduced and the first bit after the last valid
reduced physical address bit will always be reserved. Use this bit as the
MMIO mask.

Fixes: 28a1f3ac1d0c ("kvm: x86: Set highest physical address bits in non-present/reserved SPTEs")
Suggested-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
Signed-off-by: Tom Lendacky <thomas.lendacky@xxxxxxx>
---
 arch/x86/kvm/svm.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 122d4ce3b1ab..2cb834b5982a 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1361,6 +1361,32 @@ static __init int svm_hardware_setup(void)
 		}
 	}
 
+	/*
+	 * The default MMIO mask is a single bit (excluding the present bit),
+	 * which could conflict with the memory encryption bit. Check for
+	 * memory encryption support and override the default MMIO masks if
+	 * it is enabled.
+	 */
+	if (cpuid_eax(0x80000000) >= 0x8000001f) {
+		u64 msr, mask;
+
+		rdmsrl(MSR_K8_SYSCFG, msr);
+		if (msr & MSR_K8_SYSCFG_MEM_ENCRYPT) {
+			/*
+			 * The physical addressing width is reduced. The first
+			 * bit above the new physical addressing limit will
+			 * always be reserved. Use this bit and the present bit
+			 * to generate a page fault with PFER.RSV = 1.
+			 */
+			mask = BIT_ULL(boot_cpu_data.x86_phys_bits);
+			mask |= BIT_ULL(0);
+
+			kvm_mmu_set_mmio_spte_mask(mask, mask,
+						   PT_WRITABLE_MASK |
+						   PT_USER_MASK);
+		}
+	}
+
 	for_each_possible_cpu(cpu) {
 		r = svm_cpu_init(cpu);
 		if (r)
-- 
2.17.1
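
As a quick illustration of the mask arithmetic in the hunk above, here is
a minimal userspace sketch (not part of the patch). It assumes a
hypothetical processor whose physical address width has already been
reduced to 43 bits by memory encryption support, and it redefines
BIT_ULL locally so the snippet builds outside the kernel tree:

#include <stdio.h>
#include <stdint.h>

#define BIT_ULL(nr)	(1ULL << (nr))

int main(void)
{
	/*
	 * Assumed example value; in the patch this comes from
	 * boot_cpu_data.x86_phys_bits after the memory encryption
	 * reduction has been applied.
	 */
	unsigned int x86_phys_bits = 43;
	uint64_t mask;

	/* The first bit above the reduced limit is always reserved. */
	mask = BIT_ULL(x86_phys_bits);

	/* Add the present bit so the access faults with PFER.RSV = 1. */
	mask |= BIT_ULL(0);

	/* Prints 0x0000080000000001 for the assumed 43-bit width. */
	printf("MMIO SPTE mask: 0x%016llx\n", (unsigned long long)mask);

	return 0;
}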