Re: PROBLEM: Memory leaking when running kubernetes cronjobs

Hi guys,

the latest test with the latest patches (and IPv6 re-enabled) has been 
running for ~23 hours and so far the results are looking GREAT!

MemAvailable has stabilised, as has Percpu memory. The memleak output no 
longer contains those IPv6 stacks (or anything else that looks 
significant). I'm going to let it run for another day just to be 100% sure 
the issue is resolved.
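For reference, tracking the two counters over a long run only needs a small sampling loop along these lines (a sketch, not the exact script used in this test; the log file name is arbitrary, and the Percpu field only appears in /proc/meminfo on recent kernels):

```shell
#!/bin/sh
# Sketch: log MemAvailable and Percpu from /proc/meminfo once a minute,
# with a timestamp, so growth over days is easy to plot afterwards.
# Note: "Percpu:" only exists in /proc/meminfo on recent kernels.
while true; do
    awk -v ts="$(date +%s)" \
        '/^(MemAvailable|Percpu):/ { print ts, $1, $2, $3 }' /proc/meminfo
    sleep 60
done >> meminfo_samples.log
```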

Thank you so much to Roman & Mike for the excellent support & patches.

I've attached several percpu_users outputs from during the test. (The suffix 
is the number of seconds since the start of the test, so there is a 
reasonable spread throughout the test so far.)

Just to recap I'm running with the following patches:

Kernel 4.19-rc3 + the following:

https://lkml.org/lkml/2018/10/7/84
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ce7ea4af0838ffd4667ecad4eb5eec7a25342f1e
https://marc.info/?l=linux-netdev&m=153900037804969

010cb21d4ede math64: prevent double calculation of DIV64_U64_ROUND_UP() arguments
f77d7a05670d mm: don't miss the last page because of round-off error
d18bf0af683e mm: drain memcg stocks on css offlining
71cd51b2e1ca mm: rework memcg kernel stack accounting
f3a2fccbce15 mm: slowly shrink slabs with a relatively small number of objects

I also have the following applied, but these were more for debugging than 
for actually fixing the issue.

A debug patch from Roman to print cgroup stats for v1 cgroups.
A debug patch from Mike to collect debug info about percpu_users.
The CONFIG_PERCPU_STATS kernel config option enabled.

Now I need to apply these to the 4.15 kernel and check the issue is 
resolved there :-O If anyone knows of any other fixes in this area that 
went in between 4.15 & 4.19-rc3, that would be handy to know :-)
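As a starting point for that, something like the following against an upstream clone lists the commits that touched the percpu allocator between the two tags, and tries replaying the fixes above onto a 4.15 branch (a sketch; the short IDs are the ones from my list, the branch name is arbitrary, and some cherry-picks will likely conflict and need backporting by hand):

```shell
# Sketch: survey percpu allocator changes between the two releases,
# then try replaying the listed fixes onto a v4.15 branch.
git log --oneline v4.15..v4.19-rc3 -- mm/percpu.c

git checkout -b percpu-fixes-4.15 v4.15
for c in 010cb21d4ede f77d7a05670d d18bf0af683e 71cd51b2e1ca f3a2fccbce15; do
    # -x records the upstream commit id in the backported commit message
    git cherry-pick -x "$c" || break
done
```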

thanks,

Dan McGinnes

IBM Cloud - Containers performance

Int Tel: 247359        Ext Tel: 01962 817359

Notes: Daniel McGinnes/UK/IBM
Email: MCGINNES@xxxxxxxxxx

IBM (UK) Ltd, Hursley Park,Winchester,Hampshire, SO21 2JN



From:   Roman Gushchin <guro@xxxxxx>
To:     Mike Rapoport <rppt@xxxxxxxxxxxxxxxxxx>
Cc:     Daniel McGinnes <MCGINNES@xxxxxxxxxx>, "cgroups@xxxxxxxxxxxxxxx" 
<cgroups@xxxxxxxxxxxxxxx>, Nathaniel Rockwell <nrockwell@xxxxxxxxxx>
Date:   08/10/2018 16:35
Subject:        Re: PROBLEM: Memory leaking when running kubernetes 
cronjobs



On Mon, Oct 08, 2018 at 10:05:28AM +0300, Mike Rapoport wrote:
> On Sat, Oct 06, 2018 at 12:42:37AM +0000, Roman Gushchin wrote:
> > Hi Daniel!
> > 
> > On Fri, Oct 05, 2018 at 10:16:25AM +0000, Daniel McGinnes wrote:
> > > Hi Roman,
> > > 
> > > memory pressure was started after 1 hour (Ran stress --vm 16
> > > --vm-bytes 1772864000 -t 300 for 5 minutes, then sleep for 5 mins in
> > > a continuous loop).
> > > 
> > > Machine has 16 cores & 32 GB RAM.
> > > 
> > > I think the issue I still have is that even though the per-cpu is
> > > able to be reused for other per-cpu allocations, my understanding is
> > > that it will not be available for general use by applications - so if
> > > percpu memory usage is growing continuously (which we still see
> > > happening pretty slowly - but over months it would be fairly
> > > significant) it means there will be less memory available for
> > > applications to use. Please let me know if I've mis-understood
> > > something here.
> > 
> > Well, yeah, not looking good.
> > 
> > > 
> > > After seeing several stacks in IPv6 in the memory leak output I ran
> > > a test with IPv6 disabled on the host. Interestingly after 24 hours
> > > the Percpu memory reported in meminfo seems to have flattened out,
> > > whereas with IPv6 enabled it was still growing. MemAvailable is
> > > decreasing so slowly that I need to leave it longer to draw any
> > > conclusions from that.
> > 
> > Looks like there is an independent per-cpu memory leak somewhere in
> > the ipv6 stack. Not sure, of course, but if the number of dying
> > cgroups is not growing...
> 
> There is a leak in the percpu allocator itself, it never frees some of
> its metadata. I've sent the fix yesterday [1], I believe it will be
> merged in 4.19.

Perfect catch!

> 
> Also, there was a recent fix for a leak in ipv6 [2].
> 
> I'm now trying to see the dynamics of the percpu allocations, so I've
> added yet another debugfs interface for percpu (below) similar to
> /proc/vmallocinfo. I hope that by the end of the day I'll be able to
> see what is causing the increase in percpu memory.

Really looking forward to Daniel's test results: hopefully the leak will
be gone at this point.

Thanks!




Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU

Attachment: percpu_users.tar.gz
Description: GNU Zip compressed data

