Re: Bad performance since 5.9-rc1

On Fri, Dec 18, 2020, Zdenek Kaspar wrote:
> > without: kvm: x86/mmu: Fix get_mmio_spte() on CPUs supporting 5-level PT
> > I can run guest again, but with degraded performance as before.
> > 
> > Z.
> 
> With: KVM: x86/mmu: Bug fixes and cleanups in get_mmio_spte() series

Apologies, I completely missed your bug report for the get_mmio_spte() bugs.

> I can run guest again and performance is slightly better:
> 
> v5.8:        0m13.54s real     0m10.51s user     0m10.96s system
> v5.9:        6m20.07s real    11m42.93s user     0m13.57s system
> v5.10+fixes: 5m50.77s real    10m38.29s user     0m15.96s system
> 
> perf top from host when guest (openbsd) is compiling:
>   26.85%  [kernel]                  [k] queued_spin_lock_slowpath
>    8.49%  [kvm]                     [k] mmu_page_zap_pte
>    7.47%  [kvm]                     [k] __kvm_mmu_prepare_zap_page
>    3.61%  [kernel]                  [k] clear_page_rep
>    2.43%  [kernel]                  [k] page_counter_uncharge
>    2.30%  [kvm]                     [k] paging64_page_fault
>    2.03%  [kvm_intel]               [k] vmx_vcpu_run
>    2.02%  [kvm]                     [k] kvm_vcpu_gfn_to_memslot
>    1.95%  [kernel]                  [k] internal_get_user_pages_fast
>    1.64%  [kvm]                     [k] kvm_mmu_get_page
>    1.55%  [kernel]                  [k] page_counter_try_charge
>    1.33%  [kernel]                  [k] propagate_protected_usage
>    1.29%  [kvm]                     [k] kvm_arch_vcpu_ioctl_run
>    1.13%  [kernel]                  [k] get_page_from_freelist
>    1.01%  [kvm]                     [k] paging64_walk_addr_generic
>    0.83%  [kernel]                  [k] ___slab_alloc.constprop.0
>    0.83%  [kernel]                  [k] kmem_cache_free
>    0.82%  [kvm]                     [k] __pte_list_remove
>    0.77%  [kernel]                  [k] try_grab_compound_head
>    0.76%  [kvm_intel]               [k] 0x000000000001cfa0
>    0.74%  [kvm]                     [k] pte_list_add

Can you try running with this debug hack to understand what is causing KVM to
zap shadow pages?  The expected behavior is that you'll get backtraces for the
first five cases where KVM zaps valid shadow pages.  Compile tested only.


diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 5dfe0ede0e81..c5da993ac753 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2404,6 +2404,8 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
        }
 }

+static unsigned long zapped_warns;
+
 static unsigned long kvm_mmu_zap_oldest_mmu_pages(struct kvm *kvm,
                                                  unsigned long nr_to_zap)
 {
@@ -2435,6 +2437,8 @@ static unsigned long kvm_mmu_zap_oldest_mmu_pages(struct kvm *kvm,
                        goto restart;
        }

+       WARN_ON(total_zapped && zapped_warns++ < 5);
+
        kvm_mmu_commit_zap_page(kvm, &invalid_list);

        kvm->stat.mmu_recycled += total_zapped;
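
To decode the hack: the && short-circuits, so zapped_warns only advances when
total_zapped is non-zero, and the WARN_ON fires (with a backtrace) for at most
the first five zaps that actually freed valid shadow pages.  A throwaway
userspace sketch of the same throttle, purely illustrative and not part of the
patch:

#include <stdio.h>

static unsigned long zapped_warns;

/* Mirrors WARN_ON(total_zapped && zapped_warns++ < 5): the counter only
 * advances when total_zapped is non-zero, so at most five reports fire,
 * and only for zaps that actually did work. */
static void report_zap(unsigned long total_zapped)
{
	if (total_zapped && zapped_warns++ < 5)
		fprintf(stderr, "zapped %lu shadow pages (report %lu of 5)\n",
			total_zapped, zapped_warns);
}

int main(void)
{
	report_zap(0);			/* zero-page zap, never counted */
	for (int i = 0; i < 7; i++)	/* only the first five print */
		report_zap(16);
	return 0;
}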


