Re: Stalls in qemu with host running 6.1 (everything stuck at mmap_read_lock())

* Yu Zhao <yuzhao@xxxxxxxxxx> [230112 03:23]:
> On Wed, Jan 11, 2023 at 5:37 PM Pedro Falcato <pedro.falcato@xxxxxxxxx> wrote:
> >
> > On Wed, Jan 11, 2023 at 8:00 AM Jiri Slaby <jirislaby@xxxxxxxxxx> wrote:
> > >
> > > Hi,
> > >
> > > after I updated the host from 6.0 to 6.1 (at 6.1.4 ATM), my qemu VMs
> > > started stalling (and the host stalls at the same point too). It
> > > doesn't happen right after boot; maybe a suspend-resume cycle is
> > > needed (or longer uptime, or a couple of qemu VM starts, or ...). But
> > > once it happens, it keeps happening until the next reboot.
> > >
> > > Older guest kernels/distros are affected, as well as Win10.

...

> > >
> > > There should be enough free memory (note caches at 8G):
> > >                total        used        free      shared  buff/cache   available
> > > Mem:           15Gi        10Gi       400Mi       2,5Gi       8,0Gi        5,0Gi
> > > Swap:             0B          0B          0B
> > >
> > >
> > > I have now rmmod'ed kvm-intel, so:
> > >    qemu-kvm: failed to initialize kvm: No such file or directory
> > >    qemu-kvm: falling back to tcg
> > > and it behaves the same (more or less expected).
> > >
...

> > Some data I've gathered:
> > 1) It seems not to happen right after booting - I'm unsure whether this
> > is due to memory pressure, lower CPU load, or some other factor.
> > 2) It seems to intensify after swapping a fair amount? At least that
> > has been my experience.
> > 3) The largest slowdown seems to be when qemu is booting the guest,
> > possibly during heavy memory allocation - problems range from "takes
> > tens of seconds to boot" to "qemu is completely blocked and needs to be
> > spammed with SIGKILL".
> > 4) While traditional process monitoring tools break (likely due to
> > mmap_lock getting hogged), I can (empirically, using /bin/free) tell
> > that the system seems to be swapping in/out quite a bit.
> >
> > My point 4) is particularly confusing to me, as I had originally blamed
> > the problem on the MGLRU changes, yet you don't seem to be swapping at
> > all.
> > Could this be related to the maple tree patches? Should we CC both the
> > MGLRU folks and the maple tree folks?

I think we all monitor the linux-mm list, but a direct CC would not
hurt.
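
(On Pedro's point 4: one way to confirm that threads really are parked on
mmap_lock is to dump each thread's kernel stack and look for rwsem
waiters. A rough sketch, assuming root access and that the stalled
process name matches "qemu":)

    # dump the kernel stack of every thread of the stalled qemu
    # (needs root and CONFIG_STACKTRACE; the process name is an assumption)
    pid=$(pgrep -o qemu)
    for t in /proc/$pid/task/*/stack; do
            echo "== $t"
            cat "$t"
    done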

> 
> I don't think it's MGLRU, because the way it uses mmap_lock is very
> simple. Also, you could prevent MGLRU from taking mmap_lock with
> echo 3 >/sys/kernel/mm/lru_gen/enabled, disable MGLRU entirely by
> echoing 0 to the same file, or disable it at build time, to rule it
> out. (I assume you turned on MGLRU in the first place.)
> 
> Adding Liam. He can speak for the maple tree.
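
(Spelling out the MGLRU toggles Yu mentions above, as a quick way to
rule it in or out; a minimal sketch, assuming the kernel was built with
CONFIG_LRU_GEN:)

    # show the current MGLRU state (a hex bitmask; 0x0000 means disabled)
    cat /sys/kernel/mm/lru_gen/enabled

    # per Yu: keep MGLRU enabled but stop it from taking mmap_lock
    echo 3 >/sys/kernel/mm/lru_gen/enabled

    # disable MGLRU entirely
    echo 0 >/sys/kernel/mm/lru_gen/enabled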

Thanks Yu (and Vlastimil) for the Cc on this issue.  My changes to the
mmap_lock were certainly not trivial, so it could be something I've done
with regard to the maple tree, or in the changes to the mm code for the
maple tree.

There is a possibly relevant patch [1] in mm-unstable (Cc'ed to stable)
which could affect the maple tree under memory pressure.

The bug [2] would manifest as returning a range below the requested
allocation window, or as an incorrect return code.  This could certainly
cause applications to misbehave, although it is not obvious to me why
the mmap_lock would remain held if this is the issue.
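
(A rough userspace-side check for that pattern, purely a sketch with the
guest's arguments elided, is to trace the mmap calls qemu makes while
starting a guest and compare the requested windows against the returned
addresses:)

    # log mmap requests vs. returned addresses while booting a guest; an
    # address outside a requested window would fit the bug pattern above
    strace -f -e trace=mmap -o /tmp/qemu-mmap.log qemu-kvm ...
    tail /tmp/qemu-mmap.log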

1. https://lore.kernel.org/linux-mm/20230111200136.1851322-1-Liam.Howlett@xxxxxxxxxx/
2. https://bugzilla.kernel.org/show_bug.cgi?id=216911

Thanks,
Liam


