Re: Stalls in qemu with host running 6.1 (everything stuck at mmap_read_lock())

On Wed, Jan 11, 2023 at 8:00 AM Jiri Slaby <jirislaby@xxxxxxxxxx> wrote:
>
> Hi,
>
> after I updated the host from 6.0 to 6.1 (at 6.1.4 ATM), my qemu VMs
> started stalling (and the host stalls at the same point too). It
> doesn't happen right after boot; maybe a suspend-resume cycle is
> needed (or longer uptime, or a couple of qemu VM starts, or ...). But
> when it happens, it keeps happening until the next reboot.
>
> Older guest kernels/distros are affected, as is Win10.
>
> In guests, I see for example stalls in memset_orig or
> smp_call_function_many_cond -- traces below.
>
> qemu-kvm-7.1.0-13.34.x86_64 from openSUSE.
>
> It's quite interesting that:
>    $ cat /proc/<PID_OF_QEMU>/cmdline
> gets stuck in read() too:
>
> openat(AT_FDCWD, "/proc/12239/cmdline", O_RDONLY) = 3
> newfstatat(3, "", {st_mode=S_IFREG|0444, st_size=0, ...}, AT_EMPTY_PATH) = 0
> fadvise64(3, 0, 0, POSIX_FADV_SEQUENTIAL) = 0
> mmap(NULL, 139264, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1,
> 0) = 0x7f22f0487000
> read(3, ^C^C^C^\^C
>
> So I dumped the blocked tasks (sysrq-w) on the _host_ (see below),
> and everything seems to stall on mmap_read_lock() or
> mmap_write_lock_killable(). I don't see the hog (the task actually
> _holding_ and sitting on the (presumably write) lock) in the dump,
> though. I will perhaps boot a LOCKDEP-enabled kernel, so that I can
> do sysrq-d next time and see the holder.
>
>
> There should be enough free memory (note caches at 8G):
>                 total        used        free      shared  buff/cache   available
> Mem:            15Gi        10Gi       400Mi       2,5Gi       8,0Gi       5,0Gi
> Swap:             0B          0B          0B
>
>
> I rmmoded kvm-intel now, so:
>    qemu-kvm: failed to initialize kvm: No such file or directory
>    qemu-kvm: falling back to tcg
> and it behaves the same (more or less expected).
>
> Is this known? Any idea how to debug this? Or maybe someone (I CCed a
> couple of guys who Acked the mmap_*_lock() shuffling patches in 6.1)
> has a clue? Bisection is hard, as it reproduces only under certain
> unknown circumstances.

Hi,

I just want to chime in and say that I've also hit this regression,
right when I (on Arch) updated to 6.1 a few weeks ago.
It completely broke my qemu workflow, to the point that I had to fall
back to an LTS kernel.

Some data I've gathered:
1) It doesn't seem to happen right after booting - I'm unsure whether
this is due to memory pressure, lower CPU load, or some other factor.
2) It seems to intensify after swapping a fair amount - at least that
has been my experience.
3) The largest slowdown seems to happen while qemu is booting the
guest, possibly during heavy memory allocation - problems range from
"takes tens of seconds to boot" to "qemu is completely blocked and
needs a SIGKILL spam".
4) While traditional process monitoring tools break (likely because
mmap_lock is being hogged), I can tell empirically, using /bin/free,
that the system seems to be swapping in and out quite a bit (see the
sketch just below).
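
For watching swap traffic while the per-process tools hang, the global
counters should still be readable - as far as I understand, reading
/proc/vmstat doesn't take any task's mmap_lock. A rough sketch:

   # global swap-in/swap-out page counters
   $ grep -E '^pswp(in|out)' /proc/vmstat
   # or watch the rates live (si/so columns)
   $ vmstat 1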

My point 4) is particularly confusing to me, as I had originally
blamed the problem on the MGLRU changes, yet you don't seem to be
swapping at all.
Could this be related to the maple tree patches instead? Should we CC
both the MGLRU folks and the maple tree folks?

Apart from this, I have little insight into what the kernel's state
actually is - perf seems to break, and I have no kernel debugger, as
this is my live personal machine :/
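
In the meantime, SysRq via /proc/sysrq-trigger might stand in for a
debugger, the same way Jiri used sysrq-w - a minimal sketch (assuming
SysRq isn't locked down on the machine):

   # enable all SysRq functions
   $ echo 1 | sudo tee /proc/sys/kernel/sysrq
   # dump blocked (D-state) tasks to the kernel log, like sysrq-w
   $ echo w | sudo tee /proc/sysrq-trigger
   # on a CONFIG_LOCKDEP kernel, dump all held locks (sysrq-d)
   $ echo d | sudo tee /proc/sysrq-trigger
   $ sudo dmesg | tail -n 200
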
I would love it if someone hinted at possible things I/we could try in
order to track this down. Is this not git-bisectable at all?
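
If it is bisectable in principle, something along these lines between
the last good and first bad releases might narrow it down - a rough
outline, assuming a suspend/resume cycle plus a few qemu starts can
provoke the stall on demand:

   $ git bisect start
   $ git bisect bad v6.1
   $ git bisect good v6.0
   # build and boot each candidate, stress it (suspend/resume, start
   # a few VMs), then mark it:
   $ git bisect good    # or: git bisect bad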

Thanks,
Pedro



