On 16/02/2023 15:40, Jeremi Piotrowski wrote:
> On 15/02/2023 23:16, Sean Christopherson wrote:
>> On Tue, Feb 14, 2023, Jeremi Piotrowski wrote:
>>> On 13/02/2023 20:56, Paolo Bonzini wrote:
>>>> On Mon, Feb 13, 2023 at 8:12 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>>>>>> Depending on the performance results of adding the hypercall to
>>>>>> svm_flush_tlb_current, the fix could indeed be to just disable usage of
>>>>>> HV_X64_NESTED_ENLIGHTENED_TLB.
>>>>>
>>>>> Minus making nested SVM (L3) mutually exclusive, I believe this will do the trick:
>>>>>
>>>>> +	/* blah blah blah */
>>>>> +	hv_flush_tlb_current(vcpu);
>>>>> +
>>>>
>>>> Yes, it's either this or disabling the feature.
>>>>
>>>> Paolo
>>>
>>> Combining the two sub-threads, both of the suggestions:
>>>
>>> a) adding a hyperv_flush_guest_mapping(__pa(root->spt)) after kvm_tdp_mmu_get_vcpu_root_hpa's call to tdp_mmu_alloc_sp()
>>> b) adding a hyperv_flush_guest_mapping(vcpu->arch.mmu->root.hpa) to svm_flush_tlb_current()
>>>
>>> appear to work in my test case (L2 VM startup until panic due to missing rootfs).
>>>
>>> But in both of these cases (and also when I completely disable HV_X64_NESTED_ENLIGHTENED_TLB),
>>> the runtime of an iteration of the test is noticeably longer compared to tdp_mmu=0.
>>
>> Hmm, what is the test doing?
>
> Booting through OVMF and a kernel with no rootfs provided, with panic=-1 specified
> on the kernel command line. It's a pure startup-time test.

Hi Sean,

Have you been able to reproduce this by any chance?

I would be glad to see either of the two fixes merged, that is b), or a) if it
doesn't require special handling for nested SVM (L3), in order to get this
regression resolved. For reference, I've appended a rough sketch of option b)
below my signature.

Jeremi
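---

To make option b) concrete, here is a rough sketch of what the change to
svm_flush_tlb_current() could look like. This is an illustration, not a
tested patch: svm_hv_is_enlightened_tlb_enabled() is a hypothetical helper
(a real patch would presumably also check the per-VMCB enlightened_npt_tlb
control bit, not just ms_hyperv.nested_features), and svm_flush_tlb_asid()
stands in for whatever the existing ASID-flush logic in
svm_flush_tlb_current() is.

/*
 * Hypothetical helper, simplified: true when KVM itself runs as a
 * Hyper-V guest with the EnlightenedNptTlb enlightenment exposed.
 */
static bool svm_hv_is_enlightened_tlb_enabled(struct kvm_vcpu *vcpu)
{
	return ms_hyperv.nested_features & HV_X64_NESTED_ENLIGHTENED_TLB;
}

static void svm_flush_tlb_current(struct kvm_vcpu *vcpu)
{
	hpa_t root_tdp = vcpu->arch.mmu->root.hpa;

	/*
	 * Flushing the ASID only covers guest TLB entries; with the
	 * enlightened TLB, the L0 hypervisor also caches translations
	 * keyed on the NPT root that was handed to it, so it has to be
	 * told to drop them via hypercall. Guard with VALID_PAGE() in
	 * case the root hasn't been allocated yet when the flush is
	 * requested.
	 */
	if (svm_hv_is_enlightened_tlb_enabled(vcpu) && VALID_PAGE(root_tdp))
		hyperv_flush_guest_mapping(root_tdp);

	/* Existing behavior: flush only the current ASID. */
	svm_flush_tlb_asid(vcpu);
}

Option a) would instead issue the hyperv_flush_guest_mapping() once, right
after the TDP MMU allocates a new root, which trades a hypercall on every
TLB flush for one per root allocation. That difference is exactly why the
performance comparison between the two matters.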