On 08/07/20 00:36, Jim Mattson wrote:
> According to the SDM, when PAE paging would be in use following a
> MOV-to-CR0 that modifies any of CR0.CD, CR0.NW, or CR0.PG, then the
> PDPTEs are loaded from the address in CR3. Previously, kvm only loaded
> the PDPTEs when PAE paging would be in use following a MOV-to-CR0 that
> modified CR0.PG.
>
> Signed-off-by: Jim Mattson <jmattson@xxxxxxxxxx>
> Reviewed-by: Oliver Upton <oupton@xxxxxxxxxx>
> Reviewed-by: Peter Shier <pshier@xxxxxxxxxx>
> ---
>  arch/x86/kvm/x86.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 88c593f83b28..5a91c975487d 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -775,6 +775,7 @@ EXPORT_SYMBOL_GPL(pdptrs_changed);
>  int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
>  {
>  	unsigned long old_cr0 = kvm_read_cr0(vcpu);
> +	unsigned long pdptr_bits = X86_CR0_CD | X86_CR0_NW | X86_CR0_PG;
>  	unsigned long update_bits = X86_CR0_PG | X86_CR0_WP;
>
>  	cr0 |= X86_CR0_ET;
> @@ -792,9 +793,9 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
>  	if ((cr0 & X86_CR0_PG) && !(cr0 & X86_CR0_PE))
>  		return 1;
>
> -	if (!is_paging(vcpu) && (cr0 & X86_CR0_PG)) {
> +	if (cr0 & X86_CR0_PG) {
>  #ifdef CONFIG_X86_64
> -		if ((vcpu->arch.efer & EFER_LME)) {
> +		if (!is_paging(vcpu) && (vcpu->arch.efer & EFER_LME)) {
>  			int cs_db, cs_l;
>
>  			if (!is_pae(vcpu))
> @@ -804,8 +805,8 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
>  				return 1;
>  		} else
>  #endif
> -		if (is_pae(vcpu) && !load_pdptrs(vcpu, vcpu->arch.walk_mmu,
> -						 kvm_read_cr3(vcpu)))
> +		if (is_pae(vcpu) && ((cr0 ^ old_cr0) & pdptr_bits) &&
> +		    !load_pdptrs(vcpu, vcpu->arch.walk_mmu, kvm_read_cr3(vcpu)))
>  			return 1;
>  	}
>

Queued, thanks.

Paolo
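
For readers who want to experiment with the condition outside of KVM, below is a minimal standalone sketch of the reload predicate the patch implements: reload the PDPTEs from CR3 when PAE paging is in use after a MOV-to-CR0 that changed any of CR0.CD, CR0.NW, or CR0.PG. The helper names (pae_paging_after, must_reload_pdptrs) are hypothetical and exist only for illustration; the CR0 bit positions are the architectural ones.

/*
 * Standalone illustration (not KVM code) of the SDM rule the patch
 * implements. Bit positions follow the Intel SDM definition of CR0.
 */
#include <stdbool.h>
#include <stdio.h>

#define X86_CR0_PE (1UL <<  0)
#define X86_CR0_NW (1UL << 29)
#define X86_CR0_CD (1UL << 30)
#define X86_CR0_PG (1UL << 31)

/*
 * Hypothetical helper: true if PAE paging is in effect after the write,
 * i.e. paging is enabled and the guest is in PAE mode (CR4.PAE=1,
 * EFER.LME=0); the caller supplies that mode as a flag here.
 */
static bool pae_paging_after(unsigned long new_cr0, bool pae_mode)
{
	return (new_cr0 & X86_CR0_PG) && pae_mode;
}

/* Must the PDPTEs be reloaded from CR3 for this MOV-to-CR0? */
static bool must_reload_pdptrs(unsigned long old_cr0, unsigned long new_cr0,
			       bool pae_mode)
{
	const unsigned long pdptr_bits = X86_CR0_CD | X86_CR0_NW | X86_CR0_PG;

	/* XOR isolates the bits that actually changed in this write. */
	return pae_paging_after(new_cr0, pae_mode) &&
	       ((old_cr0 ^ new_cr0) & pdptr_bits);
}

int main(void)
{
	unsigned long cr0 = X86_CR0_PE | X86_CR0_PG;

	/* Toggling CR0.CD with PAE paging enabled now forces a reload... */
	printf("CD toggle reloads PDPTEs: %d\n",
	       must_reload_pdptrs(cr0, cr0 | X86_CR0_CD, true));
	/* ...while a write that changes none of CD/NW/PG does not. */
	printf("no-op write reloads PDPTEs: %d\n",
	       must_reload_pdptrs(cr0, cr0, true));
	return 0;
}

The pre-patch behavior corresponds to testing only X86_CR0_PG in pdptr_bits, which misses CD/NW toggles that the SDM says also trigger a PDPTE load.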