Hi Daniel!

On Fri, Oct 05, 2018 at 10:16:25AM +0000, Daniel McGinnes wrote:
> Hi Roman,
>
> memory pressure was started after 1 hour (ran stress --vm 16 --vm-bytes
> 1772864000 -t 300 for 5 minutes, then sleep for 5 mins in a continuous
> loop).
>
> Machine has 16 cores & 32 GB RAM.
>
> I think the issue I still have is that even though the per-cpu memory is
> able to be reused for other per-cpu allocations, my understanding is
> that it will not be available for general use by applications - so if
> percpu memory usage is growing continuously (which we still see
> happening pretty slowly - but over months it would be fairly
> significant) it means there will be less memory available for
> applications to use. Please let me know if I've misunderstood something
> here.

Well, yeah, not looking good.

> After seeing several stacks in IPv6 in the memory leak output I ran a
> test with IPv6 disabled on the host. Interestingly, after 24 hours the
> Percpu memory reported in meminfo seems to have flattened out, whereas
> with IPv6 enabled it was still growing. MemAvailable is decreasing so
> slowly that I need to leave it longer to draw any conclusions from
> that.

Looks like there is an independent per-cpu memory leak somewhere in the
IPv6 stack. I'm not sure, of course, but if the number of dying cgroups
is not growing, the dying-cgroup issue can't explain the remaining
growth.

I'd also try to check that my memleak.py is actually capturing all
allocations - maybe we're missing something...

Thanks!
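
P.S. In case it's useful, here is a minimal sketch of how I'd watch the
dying cgroup counter over time. It assumes cgroup v2 is mounted at
/sys/fs/cgroup (on a v1-only host cgroup.stat won't be there):

#!/usr/bin/env python
# Print nr_dying_descendants once a minute (cgroup v2 only).
import time

STAT = "/sys/fs/cgroup/cgroup.stat"

def dying_count():
    with open(STAT) as f:
        for line in f:
            key, val = line.split()
            if key == "nr_dying_descendants":
                return int(val)
    return -1

while True:
    print("%s %d" % (time.strftime("%H:%M:%S"), dying_count()))
    time.sleep(60)

If that number keeps climbing, we're back to the dying-cgroup side of
the problem; if it stays flat while Percpu in meminfo grows, that
supports an independent leak.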
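
And to double-check whether memleak.py is missing allocation paths, a
rough cross-check is to count percpu allocs vs. frees kernel-wide via
the percpu:percpu_alloc_percpu and percpu:percpu_free_percpu
tracepoints (available since ~4.14). This is just a bcc sketch for
comparison, not what memleak.py does internally:

#!/usr/bin/env python
# Count percpu allocations vs. frees. If allocs keep outpacing frees
# while memleak.py reports nothing, some call path is being missed.
import ctypes as ct
from time import sleep
from bcc import BPF

b = BPF(text="""
BPF_ARRAY(counts, u64, 2);

TRACEPOINT_PROBE(percpu, percpu_alloc_percpu) {
    u32 k = 0;
    u64 *v = counts.lookup(&k);
    if (v) __sync_fetch_and_add(v, 1);
    return 0;
}

TRACEPOINT_PROBE(percpu, percpu_free_percpu) {
    u32 k = 1;
    u64 *v = counts.lookup(&k);
    if (v) __sync_fetch_and_add(v, 1);
    return 0;
}
""")

while True:
    sleep(10)
    allocs = b["counts"][ct.c_int(0)].value
    frees = b["counts"][ct.c_int(1)].value
    print("allocs=%d frees=%d delta=%d" % (allocs, frees, allocs - frees))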