Re: [For Stable] mm: memcontrol: fix excessive complexity in memory.stat reporting

On Wed, Apr 24, 2019 at 11:53 AM Greg KH <gregkh@xxxxxxxxxxxxxxxxxxx> wrote:
>
>
> A: Because it messes up the order in which people normally read text.
> Q: Why is top-posting such a bad thing?
> A: Top-posting.
> Q: What is the most annoying thing in e-mail?
>
> A: No.
> Q: Should I include quotations after my reply?
>
> http://daringfireball.net/2007/07/on_top
>
> On Wed, Apr 24, 2019 at 10:35:51AM -0700, Vaibhav Rustagi wrote:
> > Apologies for sending a non-plain text e-mail previously.
> >
> > This issue is encountered in the actual production environment by our
> > customers where they are constantly creating containers
> > and tearing them down (using kubernetes for the workload).  Kubernetes
> > constantly reads the memory.stat file for accounting memory
> > information and over time (around a week) the memcg's got accumulated
> > and the response time for reading memory.stat increases and
> > customer applications get affected.
>
> Please define "affected".  Their apps still run properly, so all should
> be fine, it would be kubernetes that sees the slowdowns, not the
> application.  How exactly does this show up to an end-user?
>

Over time, as zombie cgroups accumulate, the kubelet (the process that
reads memory.stat frequently) becomes increasingly CPU intensive, and
the other user containers running on the same machine are starved of
CPU. This affects user containers in at least two ways that we know
of: (1) users see liveness probe failures because their applications
do not finish within the expected amount of time, and (2) new user
jobs cannot be scheduled.
There is certainly room to reduce the adverse effect at the Kubernetes
level as well, and we are investigating that too. But the requested
kernel patches help avoid exacerbating the problem.
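
For reference, below is a minimal sketch of the kind of measurement
involved: it times a single read of memory.stat, which is roughly what
kubelet does on every accounting pass. It assumes a cgroup v1 hierarchy
mounted at /sys/fs/cgroup/memory and is illustrative only, not our
actual monitoring code; the point is that this one read becomes
progressively slower as zombie memcgs accumulate.

	/* Hypothetical example, not the harness we use in production. */
	#include <fcntl.h>
	#include <stdio.h>
	#include <time.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[8192];
		struct timespec t0, t1;
		/* cgroup v1 mount point is an assumption about the machine */
		int fd = open("/sys/fs/cgroup/memory/memory.stat", O_RDONLY);

		if (fd < 0) {
			perror("open");
			return 1;
		}

		clock_gettime(CLOCK_MONOTONIC, &t0);
		while (read(fd, buf, sizeof(buf)) > 0)
			;	/* drain the file, as a stat reader would */
		clock_gettime(CLOCK_MONOTONIC, &t1);
		close(fd);

		printf("memory.stat read took %ld us\n",
		       (t1.tv_sec - t0.tv_sec) * 1000000L +
		       (t1.tv_nsec - t0.tv_nsec) / 1000L);
		return 0;
	}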

> > The repro steps mentioned previously was just used for testing the
> > patches locally.
> >
> > Yes, we are moving to 4.19 but are also supporting 4.14 till Jan 2020
> > (so production environment will still contain 4.14 kernel)
>
> If you are already moving to 4.19, this seems like a good as reason as
> any (hint, I can give you more) to move off of 4.14 at this point in
> time.  There's no real need to keep 4.14 around, given that you don't
> have any out-of-tree code in your kernels, so all should be simple to
> just update the next reboot, right?
>

Based on past experience, a major kernel upgrade sometimes introduces
new regressions of its own. So while we are working to roll out kernel
4.19, it may not be a practical solution for all users.

> thanks,
>
> greg k-h

Thanks,
Vaibhav


