On Tue, Aug 18, 2020 at 10:24 AM Sean Christopherson
<sean.j.christopherson@xxxxxxxxx> wrote:
>
> On Tue, Aug 18, 2020 at 10:14:39AM -0700, Jim Mattson wrote:
> > On Tue, Aug 18, 2020 at 8:20 AM Sean Christopherson
> > <sean.j.christopherson@xxxxxxxxx> wrote:
> > >
> > > I'd prefer to handle this on the switch from L2->L1. It avoids adding a
> > > kvm_x86_ops and yet another sequence of four VMWRITEs, e.g. I think this
> > > will do the trick.
> > >
> > > diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> > > index 9c74a732b08d..67465f0ca1b9 100644
> > > --- a/arch/x86/kvm/vmx/nested.c
> > > +++ b/arch/x86/kvm/vmx/nested.c
> > > @@ -4356,6 +4356,9 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
> > >         if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
> > >                 kvm_vcpu_flush_tlb_current(vcpu);
> > >
> > > +       if (enable_ept && is_pae_paging(vcpu))
> > > +               ept_load_pdptrs(vcpu);
> > > +
> >
> > Are the mmu->pdptrs[] guaranteed to be valid at this point? If L2 has
> > PAE paging enabled, and it has modified CR3 without a VM-exit, where
> > are the current PDPTE values read from the vmcs02 into mmu->pdptrs[]?
>
> ept_load_pdptrs() checks kvm_register_is_dirty(vcpu, VCPU_EXREG_PDPTR). The
> idea is basically the same as the above TLB_FLUSH_CURRENT; process pending
> requests and/or dirty state for L2 before switching to L1.

Thanks. Is it right to conclude that if we get to the end of
nested_vmx_vmexit, and vcpu->arch.regs_dirty is non-zero, then
something is amiss?
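
For readers following the thread, here is a rough sketch of the
ept_load_pdptrs() path being discussed, paraphrased from memory of the
vmx.c of this era rather than quoted exactly; the parts that matter for
the exchange above are the VCPU_EXREG_PDPTR dirty check and the four
GUEST_PDPTRn VMWRITEs:

void ept_load_pdptrs(struct kvm_vcpu *vcpu)
{
        struct kvm_mmu *mmu = vcpu->arch.walk_mmu;

        /* No-op unless something marked the PDPTR cache dirty for this vCPU. */
        if (!kvm_register_is_dirty(vcpu, VCPU_EXREG_PDPTR))
                return;

        /* PDPTRs are only architecturally meaningful under PAE paging. */
        if (is_pae_paging(vcpu)) {
                vmcs_write64(GUEST_PDPTR0, mmu->pdptrs[0]);
                vmcs_write64(GUEST_PDPTR1, mmu->pdptrs[1]);
                vmcs_write64(GUEST_PDPTR2, mmu->pdptrs[2]);
                vmcs_write64(GUEST_PDPTR3, mmu->pdptrs[3]);
        }
}

Because the proposed hunk calls this before nested_vmx_vmexit() switches
back to vmcs01, any PDPTEs cached while L2 was running are flushed into
vmcs02 only if they are actually dirty, mirroring how the pending
TLB_FLUSH_CURRENT request is handled just above it.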