When shadowing EPT pages set up by L1 for a nested L2 guest, the value
of the PAE bit in %cr4 is irrelevant. However, in the page role of a
shadow page, cr4_pae essentially means that the shadowed page uses
64-bit page table entries. When shadowing EPT page tables this is
always the case, so set cr4_pae for that role.

Similarly, calls to is_pae(vcpu) do not return useful information when
shadowing EPT tables. With the change above we can check the cr4_pae
bit in the current MMU's base_role instead. In most cases this is the
same as is_pae() anyway; when shadowing EPT tables, however, using
is_pae() is wrong.

Signed-off-by: Christian Ehrhardt <lk@xxxxxxx>
---
 arch/x86/kvm/mmu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 51b953ad9d4e..01857e4cafee 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2180,7 +2180,7 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 static bool __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 			    struct list_head *invalid_list)
 {
-	if (sp->role.cr4_pae != !!is_pae(vcpu)
+	if (sp->role.cr4_pae != vcpu->arch.mmu.base_role.cr4_pae
 	    || vcpu->arch.mmu.sync_page(vcpu, sp) == 0) {
 		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);
 		return false;
@@ -4838,6 +4838,7 @@ kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty)
 	role.direct = false;
 	role.ad_disabled = !accessed_dirty;
 	role.guest_mode = true;
+	role.cr4_pae = true;
 	role.access = ACC_ALL;
 
 	return role;
@@ -5023,7 +5024,7 @@ static u64 mmu_pte_write_fetch_gpte(struct kvm_vcpu *vcpu, gpa_t *gpa,
 	 * as the current vcpu paging mode since we update the sptes only
 	 * when they have the same mode.
 	 */
-	if (is_pae(vcpu) && *bytes == 4) {
+	if (vcpu->arch.mmu.base_role.cr4_pae && *bytes == 4) {
 		/* Handle a 32-bit guest writing two halves of a 64-bit gpte */
 		*gpa &= ~(gpa_t)7;
 		*bytes = 8;
-- 
2.17.1
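
For reviewers: below is a minimal standalone sketch of the
__kvm_sync_page() change, using hypothetical simplified types
(page_role, vcpu_state, sync_ok_*) rather than the real kvm_mmu
structures. It only illustrates the reasoning above: comparing against
base_role.cr4_pae agrees with is_pae() for ordinary guests, but
diverges for shadowed EPT, where the role bit is always set.

/*
 * Standalone sketch (not kernel code): models the role comparison this
 * patch changes.  struct page_role, struct vcpu_state and the sync_ok_*
 * helpers are simplified stand-ins for the real kvm_mmu types.
 */
#include <stdbool.h>
#include <stdio.h>

struct page_role {
	bool cr4_pae;		/* shadow page uses 64-bit (8-byte) PTEs */
};

struct vcpu_state {
	bool guest_cr4_pae;	/* what is_pae(vcpu) would report */
	struct page_role base_role;
};

/* Old check: trusts the guest's CR4.PAE even where it is irrelevant. */
static bool sync_ok_old(const struct vcpu_state *v, const struct page_role *sp)
{
	return sp->cr4_pae == v->guest_cr4_pae;
}

/* New check: compares against the current MMU's base role instead. */
static bool sync_ok_new(const struct vcpu_state *v, const struct page_role *sp)
{
	return sp->cr4_pae == v->base_role.cr4_pae;
}

int main(void)
{
	/*
	 * Shadowed EPT: EPT entries are always 64-bit, so the base role
	 * has cr4_pae set no matter what the guest's CR4 says -- here a
	 * non-PAE guest, the case where is_pae() gives the wrong answer.
	 */
	struct vcpu_state v = {
		.guest_cr4_pae = false,
		.base_role = { .cr4_pae = true },
	};
	struct page_role sp = { .cr4_pae = true };

	printf("old check matches: %d\n", sync_ok_old(&v, &sp)); /* 0 -> page wrongly zapped */
	printf("new check matches: %d\n", sync_ok_new(&v, &sp)); /* 1 -> page correctly kept */
	return 0;
}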