On Wed, 2025-02-05 at 18:24 +0000, Yosry Ahmed wrote:
> TLB_CONTROL is reset to TLB_CONTROL_DO_NOTHING on nested transitions to
> L2. This is unnecessary because it should always be
> TLB_CONTROL_DO_NOTHING at this point.
> 
> The flow for setting TLB_CONTROL is as follows:
> 1. In vcpu_enter_guest(), servicing a TLB flush request may set it to
>    TLB_CONTROL_FLUSH_ASID in svm_flush_tlb_asid().
> 2. In svm_vcpu_run() -> pre_svm_run(), it may get upgraded to
>    TLB_CONTROL_FLUSH_ALL_ASID when assigning a new ASID.
> 3. In svm_vcpu_run(), it gets reset to TLB_CONTROL_DO_NOTHING after the
>    guest is run.
> 
> Hence, TLB_CONTROL is reset after each run and there is no need to do it
> again on every nested transition to L2.
> 
> There is a TODO in nested_svm_transition_tlb_flush() about this reset
> crushing pending TLB flushes. Remove it, as the reset is not really
> crushing anything as explained above.

I am not sure that we don't crush a pending TLB flush request:
svm_flush_tlb_asid() can also be called when servicing KVM_REQ_TLB_FLUSH
and set the flush request in both vmcbs, so nested_svm_exit_tlb_flush()
can later crush this request.

But the patch itself does look OK to me, although I might be mistaken.

Reviewed-by: Maxim Levitsky <mlevitsk@xxxxxxxxxx>

Best regards,
	Maxim Levitsky

> 
> Signed-off-by: Yosry Ahmed <yosry.ahmed@xxxxxxxxx>
> ---
>  arch/x86/kvm/svm/nested.c | 12 ++----------
>  1 file changed, 2 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index 12bb391884299..8e40ff21f7353 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -512,12 +512,7 @@ static void nested_svm_entry_tlb_flush(struct kvm_vcpu *vcpu)
>  		svm->nested.last_asid = svm->nested.ctl.asid;
>  		kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu);
>  	}
> -	/*
> -	 * TODO: optimize unconditional TLB flush/MMU sync. A partial list of
> -	 * things to fix before this can be conditional:
> -	 *
> -	 * - Don't crush a pending TLB flush in vmcb02 on nested VMRUN
> -	 */
> +	/* TODO: optimize unconditional TLB flush/MMU sync */
>  	kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
>  	kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
>  }
> @@ -536,7 +531,7 @@ static void nested_svm_exit_tlb_flush(struct kvm_vcpu *vcpu)
>  	if (svm->nested.ctl.tlb_ctl == TLB_CONTROL_FLUSH_ALL_ASID)
>  		kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu);
> 
> -	/* See nested_svm_entry_tlb_flush() */
> +	/* TODO: optimize unconditional TLB flush/MMU sync */
>  	kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
>  	kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
>  }
> @@ -717,9 +712,6 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
> 
>  	/* Done at vmrun: asid. */
> 
> -	/* Also overwritten later if necessary. */
> -	svm_clear_tlb_ctl_flush(vmcb02);
> -
>  	/* nested_cr3. */
>  	if (nested_npt_enabled(svm))
>  		nested_svm_init_mmu_context(vcpu);