On 5/10/22 02:20, Sean Christopherson wrote:
--
From: Sean Christopherson <seanjc@xxxxxxxxxx>
Date: Mon, 9 May 2022 17:13:39 -0700
Subject: [PATCH] KVM: x86/mmu: Return true from is_cr4_pae() iff CR0.PG is set
Condition is_cr4_pae() on is_cr0_pg() in addition to the !4-byte gPTE
check. From the MMU's perspective, PAE is disabled if paging is
disabled. The current code works because all callers check is_cr0_pg()
before invoking is_cr4_pae(), but relying on callers to maintain that
behavior is unnecessarily risky.
Fixes: faf729621c96 ("KVM: x86/mmu: remove redundant bits from extended role")
Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
---
arch/x86/kvm/mmu/mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 909372762363..d1c20170a553 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -240,7 +240,7 @@ static inline bool is_cr0_pg(struct kvm_mmu *mmu)
 
 static inline bool is_cr4_pae(struct kvm_mmu *mmu)
 {
-	return !mmu->cpu_role.base.has_4_byte_gpte;
+	return is_cr0_pg(mmu) && !mmu->cpu_role.base.has_4_byte_gpte;
 }
 
 static struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu)
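
For reference, the pattern the changelog relies on ("all callers check
is_cr0_pg() before invoking is_cr4_pae()") looks roughly like this; the
function and helpers below are hypothetical, for illustration only:

	/*
	 * Hypothetical caller, for illustration only: CR0.PG is checked
	 * first, so is_cr4_pae() is never reached while paging is off.
	 */
	static void example_reset_paging_metadata(struct kvm_mmu *mmu)
	{
		if (!is_cr0_pg(mmu))
			return;		/* no guest page tables to walk */

		if (is_cr4_pae(mmu))
			walk_8_byte_gptes(mmu);	/* hypothetical helper */
		else
			walk_4_byte_gptes(mmu);	/* hypothetical helper */
	}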
Hmm, thinking more about it, this is not needed, for two somewhat opposite
reasons:
* if is_cr4_pae() really were to represent the raw CR4.PAE value, this
  change is incorrect, and it should be up to the callers to check
  is_cr0_pg()

* if is_cr4_pae() instead represents "8-byte page table entries", then it
  already does so even before this patch, because of the following logic
  in kvm_calc_cpu_role():
	if (!____is_cr0_pg(regs)) {
		role.base.direct = 1;
		return role;
	}
	...
	role.base.has_4_byte_gpte = !____is_cr4_pae(regs);
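
Spelled out against the helper itself (the comment is mine, for
illustration, and assumes the role is zero-initialized at the top of
kvm_calc_cpu_role()): the early return above never sets
has_4_byte_gpte, so the bit stays clear when paging is off:

	static inline bool is_cr4_pae(struct kvm_mmu *mmu)
	{
		/*
		 * With CR0.PG = 0, kvm_calc_cpu_role() returned early and
		 * left has_4_byte_gpte zero-initialized, so this already
		 * evaluates to true ("8-byte entries") with paging off.
		 */
		return !mmu->cpu_role.base.has_4_byte_gpte;
	}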
So whatever meaning we give to is_cr4_pae(), there is no need for the
adjustment.
Paolo