On Tue, Mar 17, 2020 at 06:18:37PM +0100, Paolo Bonzini wrote:
> On 17/03/20 05:52, Sean Christopherson wrote:
> > diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> > index d816f1366943..a77eab5b0e8a 100644
> > --- a/arch/x86/kvm/vmx/nested.c
> > +++ b/arch/x86/kvm/vmx/nested.c
> > @@ -1123,7 +1123,7 @@ static int nested_vmx_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3, bool ne
> >  	}
> >  
> >  	if (!nested_ept)
> > -		kvm_mmu_new_cr3(vcpu, cr3, false);
> > +		kvm_mmu_new_cr3(vcpu, cr3, enable_ept);
> 
> Even if enable_ept == false, we could have already scheduled a TLB
> flush, or will flush the TLB soon, due to one of: 1)
> nested_vmx_transition_tlb_flush 2) vpid_sync_context in prepare_vmcs02
> 3) the processor doing it for !enable_vpid.
> 
> So for !enable_ept only KVM_REQ_MMU_SYNC is needed, not
> KVM_REQ_TLB_FLUSH_CURRENT, I think.  Worth adding a TODO?

Now that you point it out, I think it makes sense to unconditionally pass
%true here, i.e. rely 100% on nested_vmx_transition_tlb_flush() to do the
right thing.
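
For reference, a rough sketch (untested, not a formal patch) of what
"unconditionally pass %true" would look like at this call site:

	/* Unconditionally skip the TLB flush on fast CR3 switch; all TLB
	 * flushes for nested VM-Enter/VM-Exit are handled by
	 * nested_vmx_transition_tlb_flush().
	 */
	if (!nested_ept)
		kvm_mmu_new_cr3(vcpu, cr3, true);

That would make nested_vmx_transition_tlb_flush() solely responsible for
requesting any flush or sync on the nested transition, instead of
splitting the logic between it and the CR3 load.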