Re: Bad performance since 5.9-rc1

On Thu, Nov 19, 2020, Zdenek Kaspar wrote:
> Hi,
> 
> in my initial report (https://marc.info/?l=kvm&m=160502183220080&w=2 -
> now fixed by c887c9b9ca62c051d339b1c7b796edf2724029ed) I saw degraded
> performance going back somewhere between v5.8 - v5.9-rc1.
> 
> OpenBSD 6.8 (GENERIC.MP) guest performance (time ./test-build.sh)
> good: 0m13.54s real     0m10.51s user     0m10.96s system
> bad : 6m20.07s real    11m42.93s user     0m13.57s system
> 
> bisected to first bad commit: 6b82ef2c9cf18a48726e4bb359aa9014632f6466

This is working as intended, in the sense that it's expected that guest
performance would go down the drain due to KVM being much more aggressive when
reclaiming shadow pages.  Prior to commit 6b82ef2c9cf1 ("KVM: x86/mmu: Batch zap
MMU pages when recycling oldest pages"), the zapping was completely anemic,
e.g. a few shadow pages would get zapped each call, without even really making a
dent in the memory consumed by KVM for shadow pages.

Any chance you can track down what is triggering KVM reclaim of shadow pages?
E.g. is KVM hitting its limit on the number of MMU pages and reclaiming via
make_mmu_pages_available()?  Or is the host under high memory pressure and
reclaiming memory via mmu_shrink_scan()?
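
One way to get a first read on that question is to sample KVM's per-VM debugfs counters. The sketch below is a rough diagnostic aid, not part of the original report: it assumes debugfs is mounted at /sys/kernel/debug and that the kernel exposes stats such as mmu_recycled (pages zapped on the make_mmu_pages_available() path) and mmu_shadow_zapped; exact stat names and layout vary by kernel version, so treat the file names as assumptions to verify against your kernel.

```shell
# Hypothetical sketch: sample KVM per-VM MMU counters from debugfs.
# Needs root; stat file names (mmu_recycled, mmu_shadow_zapped,
# mmu_cache_miss) are assumptions that may differ across kernel versions.
KVM_DEBUG=/sys/kernel/debug/kvm
echo "Sampling KVM MMU stats from $KVM_DEBUG"
if [ -d "$KVM_DEBUG" ]; then
    kvm_dbg_present=yes
    # Each VM shows up as a <pid>-<fd> directory of stat files.
    for vm in "$KVM_DEBUG"/*-*/; do
        [ -d "$vm" ] || continue
        echo "VM: $vm"
        for stat in mmu_recycled mmu_shadow_zapped mmu_cache_miss; do
            [ -f "$vm$stat" ] && printf '  %-18s %s\n' "$stat" "$(cat "$vm$stat")"
        done
    done
else
    kvm_dbg_present=no
    echo "kvm debugfs not available (need root and a KVM-enabled kernel)"
fi
```

If mmu_recycled climbs while the guest is slow, the VM is likely hitting its MMU page limit and reclaiming via make_mmu_pages_available(); if it stays flat under host memory pressure, the shrinker (mmu_shrink_scan()) is the more likely suspect.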
