Re: "KVM: x86/mmu: Overhaul TDP MMU zapping and flushing" breaks SVM on Hyper-V


 



On 2/13/23 19:05, Jeremi Piotrowski wrote:
> So I looked at the ftrace (all kvm & kvmmu events + hyperv_nested_*
> events) and I see the following.
>
> With tdp_mmu=0:
>   kvm_exit
>   sequence of kvm_mmu_prepare_zap_page
>   hyperv_nested_flush_guest_mapping (always follows every sequence of
>     kvm_mmu_prepare_zap_page)
>   kvm_entry
>
> With tdp_mmu=1 I see kvm_mmu_prepare_zap_page and kvm_tdp_mmu_spte_changed
> events from a kworker context, but they are not followed by
> hyperv_nested_flush_guest_mapping. The only hyperv_nested_flush_guest_mapping
> events I see happen from the qemu process context.
>
> Also, the number of flush hypercalls is significantly lower: a 7-second
> run through OVMF with tdp_mmu=0 produces ~270 flush hypercalls. In the
> traces with tdp_mmu=1 I now see at most 3.
>
> So this might be easier to diagnose than I thought: the
> HvCallFlushGuestPhysicalAddressSpace calls are missing now.

Can you check if KVM is reusing an nCR3 value, i.e. allocating a new TDP MMU
root at the same physical address as one that was torn down earlier? If that
happens, Hyper-V could still have stale mappings cached for that address
space, since the flush hypercalls are no longer issued when the old root is
zapped.
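
One way to check (a debug hack only, and the exact context in
kvm_tdp_mmu_get_vcpu_root_hpa() depends on the tree you are on) is to log
every root that gets handed out, right after tdp_mmu_alloc_sp()/
tdp_mmu_init_sp(), so any reuse shows up directly in the ftrace output:

	/*
	 * Debug only: dump the physical address of the freshly installed
	 * root, i.e. the value that will be used as nCR3.
	 */
	trace_printk("new TDP MMU root at 0x%llx\n",
		     (unsigned long long)__pa(root->spt));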

If so, perhaps you can just add a hyperv_flush_guest_mapping(__pa(root->spt))
call after kvm_tdp_mmu_get_vcpu_root_hpa()'s call to tdp_mmu_alloc_sp()?
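
Roughly like this (completely untested, the surrounding context is from
memory and may differ in your tree, it needs an #include <asm/mshyperv.h>
in tdp_mmu.c, and a real fix should only issue the hypercall when actually
running on Hyper-V):

--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
 	root = tdp_mmu_alloc_sp(vcpu);
 	tdp_mmu_init_sp(root, NULL, 0, role);
 
+	/*
+	 * The page backing this root may have been handed out as an nCR3
+	 * before; ask Hyper-V to drop whatever it has cached for this
+	 * address space (HvCallFlushGuestPhysicalAddressSpace).
+	 */
+	hyperv_flush_guest_mapping(__pa(root->spt));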

Paolo



