Re: [Bug 218259] New: High latency in KVM guests

On Tue, Dec 12, 2023, bugzilla-daemon@xxxxxxxxxx wrote:
> The affected hosts run Debian 12; up to Debian 11 there was no trouble.
> I git-bisected the kernel, and the commit that appears to somehow cause the
> trouble is:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=f47e5bbbc92f5d234bbab317523c64a65b6ac4e2

Huh.  That commit makes it so that KVM keeps non-leaf SPTEs, i.e. upper level page
table structures, when zapping/unmapping a guest memory range.  The idea is that
preserving paging structures will allow for faster unmapping (less work) and faster
repopulation if/when the guest faults the memory back in (again, less work to create
a valid mapping).

The only downside that comes to mind is that keeping upper level paging structures
will make it more costly to handle future invalidations as KVM will have to walk
deeper into the page tables before discovering more work that needs to be done.

> Qemu command line: See below.
> Problem does *not* go away when appending "kernel_irqchip=off" to the -machine
> parameter
> Problem *does* go away with "-accel tcg", even though the guest becomes much
> slower.

Yeah, that's expected, as that completely takes KVM out of the picture.

> All affected guests run kubernetes with various workloads, mostly Java,
> databases like postgres, and a few legacy 32-bit containers.
> 
> The best method I found to manually trigger the problem was to drain other
> kubernetes nodes, causing many pods to start at the same time on the affected
> guest. But the problem still occurs even after the initial load has settled,
> there is little I/O, and the guest is about 80% idle.
> 
> The problem occurs whether the host runs only a single guest or lots of other
> (non-kubernetes) guests.
> 
> Other (i.e. non-kubernetes) guests don't appear to be affected, but those
> have far fewer resources and usually less load.

The affected flows are used only for handling mmu_notifier invalidations and for
edge cases related to non-coherent DMA.  I don't see any passthrough devices in
your setup, so that rules out the non-coherent DMA side of things.
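
If it would help to confirm that mmu_notifier invalidations are actually firing
at a high rate, one quick way to count them (assuming perf is installed and that
the kvm:kvm_unmap_hva_range tracepoint exists on your kernel; tracepoint names
can vary across versions) is:

        perf stat -e kvm:kvm_unmap_hva_range -a sleep 10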

A few things to try:

 1. Disable KSM (if enabled):

        echo 0 > /sys/kernel/mm/ksm/run

 2. Disable NUMA autobalancing (if enabled):

        echo 0 > /proc/sys/kernel/numa_balancing

 3. Disable KVM's TDP MMU.  On pre-v6.3 kernels, this can be done without having
    to reload KVM (or reboot, if KVM is built into the kernel):

        echo N > /sys/module/kvm/parameters/tdp_mmu

    On v6.3 and later kernels, tdp_mmu is a read-only module param and so needs
    to be disabled when loading kvm.ko or when booting the kernel.
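
    For example, assuming an Intel host with KVM built as modules (use kvm_amd
    instead of kvm_intel on AMD); this is just a sketch, adjust for your setup:

        modprobe -r kvm_intel kvm
        modprobe kvm tdp_mmu=N
        modprobe kvm_intel

    Or, if KVM is built into the kernel, boot with kvm.tdp_mmu=N on the kernel
    command line.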

There are plenty more things that can be tried, but the above are relatively easy
and will hopefully narrow down the search significantly.

Oh, and one question: is your host kernel preemptible?
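
One way to check (assuming Debian's usual /boot/config-* files):

        grep CONFIG_PREEMPT /boot/config-$(uname -r)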
