Re: PROBLEM: Memory leaking when running kubernetes cronjobs

Hi Roman,

I've booted with your cgroup.stat patch applied (and verified I can now
see the nr_dying_descendants stat) and am ready to run again, but I don't
seem to have /sys/kernel/debug/percpu_stats available. Any ideas how to
enable this? It would be good to have it available to confirm your
theory for this run.
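For reference, this is roughly how I'm checking for it (a quick sketch;
my guess is that the file is gated behind CONFIG_PERCPU_STATS, but
please correct me if that's wrong):

import os

# Quick check: dump the per-cpu allocator stats if the kernel exposes
# them. My understanding (an assumption, not verified here) is that
# /sys/kernel/debug/percpu_stats only exists when the kernel was built
# with CONFIG_PERCPU_STATS=y and debugfs is mounted.
PATH = "/sys/kernel/debug/percpu_stats"

if os.path.exists(PATH):
    with open(PATH) as f:
        print(f.read())
else:
    print(f"{PATH} not present; kernel may lack CONFIG_PERCPU_STATS")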

thanks,

Dan McGinnes

IBM Cloud - Containers performance

Int Tel: 247359        Ext Tel: 01962 817359

Notes: Daniel McGinnes/UK/IBM
Email: MCGINNES@xxxxxxxxxx

IBM (UK) Ltd, Hursley Park, Winchester, Hampshire, SO21 2JN



From:    Roman Gushchin <guro@xxxxxx>
To:      Daniel McGinnes <MCGINNES@xxxxxxxxxx>
Cc:      "cgroups@xxxxxxxxxxxxxxx" <cgroups@xxxxxxxxxxxxxxx>, Nathaniel Rockwell <nrockwell@xxxxxxxxxx>
Date:    25/09/2018 13:58
Subject: Re: PROBLEM: Memory leaking when running kubernetes cronjobs



Hi Daniel!

On Tue, Sep 25, 2018 at 09:15:20AM +0000, Daniel McGinnes wrote:
> Hi Roman,
> 
> I left it running with the patches for a longer period (with no memory 
> pressure), and it leaked at a very similar rate to without the patches.
> 
> I then started applying some memory pressure (ran stress --vm 16 
> --vm-bytes 1772864000 -t 300 for 5 minutes, then slept for 5 minutes, 
> in a continuous loop).
> 
> When I started running stress, MemAvailable increased by ~2 GB - but 
> this still left ~4 GB "leaked". The interesting thing is, I left the 
> test running whilst the memory pressure loop was running for ~1 day so 
> far, and it looks like no additional memory has been "leaked". I have 
> tried drop_caches and it doesn't clear any additional memory. So it 
> looks to me like if memory pressure is applied whilst the test is 
> running it doesn't leak additional memory - but it won't free up all 
> the memory that was previously leaked. How does this match with your 
> expected behaviour?

Yeah, this is pretty much what I was expecting.

I'd bet that if you look at the number of dying cgroups, you'll find
that my patches caused them to be reclaimed once the memory pressure
was applied (either after or during the test).
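Something like the quick Python sketch below can show where they
accumulate (illustrative only - it assumes the v2 hierarchy is mounted
at /sys/fs/cgroup; on some distros it's /sys/fs/cgroup/unified instead):

import os

ROOT = "/sys/fs/cgroup"  # adjust if your unified hierarchy lives elsewhere

def dying_descendants(cgroup_dir):
    """Parse nr_dying_descendants out of this cgroup's cgroup.stat."""
    try:
        with open(os.path.join(cgroup_dir, "cgroup.stat")) as f:
            for line in f:
                key, value = line.split()
                if key == "nr_dying_descendants":
                    return int(value)
    except OSError:
        pass  # e.g. a v1 controller directory with no cgroup.stat
    return 0

# The root's counter already covers the whole tree; printing every level
# helps spot where the dying cgroups are piling up.
for dirpath, dirnames, _ in os.walk(ROOT):
    n = dying_descendants(dirpath)
    if n:
        print(f"{n:8d}  {dirpath}")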

But in the case where you apply the memory pressure afterwards,
fragmentation in the per-cpu memory allocator is the reason why you are
not getting the memory back. It's not a real "leak": the memory can
perfectly well be used for per-cpu allocations again (if you were to
create a big number of cgroups again, for example).
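A toy model (with made-up numbers, not the real per-cpu allocator
parameters) makes the fragmentation point concrete: a chunk can only be
returned to the system once every allocation inside it has been freed,
so freeing most allocations releases only a handful of chunks:

import random

OBJS_PER_CHUNK = 64   # arbitrary illustrative chunk capacity
NUM_OBJS = 64_000     # arbitrary number of live allocations

# Fill chunks densely, as a long-running host creating many cgroups would.
chunks = [set(range(i, i + OBJS_PER_CHUNK))
          for i in range(0, NUM_OBJS, OBJS_PER_CHUNK)]

# Free 95% of the objects at random (cgroups dying and being reclaimed).
for obj in random.sample(range(NUM_OBJS), int(NUM_OBJS * 0.95)):
    chunks[obj // OBJS_PER_CHUNK].discard(obj)

empty = sum(1 for c in chunks if not c)
print(f"{empty}/{len(chunks)} chunks freeable after freeing 95% of objects")
# Expect only a few percent of the chunks to be completely empty: the
# rest stay resident, but remain reusable for future per-cpu allocations.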

If you have per-cpu stats available on your host
(/sys/kernel/debug/percpu_stats), please post them, as they can confirm
or refute my theory.

Thanks!







