On 13/02/2023 20:56, Paolo Bonzini wrote:
> On Mon, Feb 13, 2023 at 8:12 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>>> My reading of the spec[1] is that HV_X64_NESTED_ENLIGHTENED_TLB will cause
>>> svm_flush_tlb_current to behave (in Intel parlance) as an INVVPID rather
>>> than an INVEPT.
>>
>> Oh! Good catch! Yeah, that'll be a problem.
>>
>>> So svm_flush_tlb_current has to be changed to also add a
>>> call to HvCallFlushGuestPhysicalAddressSpace. I'm not sure if that's a good
>>> idea though.
>>
>> That's not strictly necessary, e.g. flushes from kvm_invalidate_pcid() and
>> kvm_post_set_cr4() don't need to effect a full flush. I believe the virtual
>> address flush is also sufficient for avic_activate_vmcb(). Nested (from KVM's
>> perspective, i.e. running L3) can just be mutually exclusive with
>> HV_X64_NESTED_ENLIGHTENED_TLB.
>>
>> That just leaves kvm_mmu_new_pgd()'s force_flush_and_sync_on_reuse and the
>> aforementioned kvm_mmu_load().
>>
>> That said, the above cases where a virtual address flush is sufficient are
>> rare operations when using NPT, so adding a new KVM_REQ_TLB_FLUSH_ROOT or
>> whatever probably isn't worth doing.
>>
>>> First, that's a TLB shootdown rather than just a local thing;
>>> flush_tlb_current is supposed to be relatively cheap, and there would be a
>>> lot of them because of the unconditional calls to
>>> nested_svm_transition_tlb_flush on vmentry/vmexit.
>>
>> This isn't a nested scenario for KVM though.
>
> Yes, but svm_flush_tlb_current() *is* also used in nested scenarios so
> it's like you said below---you would have to disable enlightened TLB
> when EFER.SVME=1 or something like that.
>
>>> Depending on the performance results of adding the hypercall to
>>> svm_flush_tlb_current, the fix could indeed be to just disable usage of
>>> HV_X64_NESTED_ENLIGHTENED_TLB.
>>
>> Minus making nested SVM (L3) mutually exclusive, I believe this will do the trick:
>>
>> +	/* blah blah blah */
>> +	hv_flush_tlb_current(vcpu);
>> +
>
> Yes, it's either this or disabling the feature.
>
> Paolo

Combining the two sub-threads, both of the suggestions:

a) adding a hyperv_flush_guest_mapping(__pa(root->spt)) after
   kvm_tdp_mmu_get_vcpu_root_hpa's call to tdp_mmu_alloc_sp()

b) adding a hyperv_flush_guest_mapping(vcpu->arch.mmu->root.hpa) to
   svm_flush_tlb_current()

appear to work in my test case (L2 VM startup until it panics due to a
missing rootfs); a rough sketch of both changes is at the end of this
mail.

But in both of these cases (and also when I completely disable
HV_X64_NESTED_ENLIGHTENED_TLB) the runtime of an iteration of the test
is noticeably longer compared to tdp_mmu=0. So in terms of performance
the ranking is (fastest to slowest):

1. tdp_mmu=0 + enlightened TLB
2. tdp_mmu=0 + no enlightened TLB
3. tdp_mmu=1 (enlightened TLB makes minimal difference)
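
For reference, here is roughly what I tested. This is only a sketch,
not a proper patch: exact placement depends on the tree, and a real
fix would presumably also gate the calls on the enlightenment actually
being in use. hyperv_flush_guest_mapping() is the existing helper from
asm/mshyperv.h that issues HvCallFlushGuestPhysicalAddressSpace for a
given root.

a) in kvm_tdp_mmu_get_vcpu_root_hpa() (arch/x86/kvm/mmu/tdp_mmu.c),
   right after the new root is allocated:

	root = tdp_mmu_alloc_sp(vcpu);
	...
+	/*
+	 * The hypervisor may still hold gpa->hpa translations cached
+	 * for a previous root that lived at this address. With the
+	 * enlightened TLB, svm_flush_tlb_current() no longer flushes
+	 * those, so flush the guest physical address space explicitly.
+	 */
+	hyperv_flush_guest_mapping(__pa(root->spt));

b) in svm_flush_tlb_current() (arch/x86/kvm/svm/svm.c):

 static void svm_flush_tlb_current(struct kvm_vcpu *vcpu)
 {
+	/*
+	 * With HV_X64_NESTED_ENLIGHTENED_TLB, the ASID flush below
+	 * only covers gva->gpa (INVVPID-like) translations; the
+	 * gpa->hpa (INVEPT-like) translations for the current root
+	 * have to be flushed via the hypercall as well.
+	 */
+	if (VALID_PAGE(vcpu->arch.mmu->root.hpa))
+		hyperv_flush_guest_mapping(vcpu->arch.mmu->root.hpa);
+
 	...
 }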