On Wed, Apr 7, 2021 at 4:55 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
>
> On Mon 05-04-21 11:18:48, Bharata B Rao wrote:
> > Hi,
> >
> > When running 10000 (more-or-less-empty-)containers on a bare-metal Power9
> > server (160 CPUs, 2 NUMA nodes, 256G memory), it is seen that memory
> > consumption increases quite a lot (around 172G) when the containers are
> > running. Most of it comes from slab (149G) and within slab, the majority of
> > it comes from the kmalloc-32 cache (102G).
>
> Is this 10k cgroups a testing environment or does anybody really use that
> in production? I would be really curious to hear how that behaves when
> those containers are not idle. E.g. global memory reclaim iterating over
> 10k memcgs will likely be very visible. I do remember playing with
> similar setups a few years back and the overhead was very high.
> --

I can tell about our environment. A couple of thousand memcgs (~2k) are
very normal on our machines, as a machine can be running 100+ jobs (and
each job can manage its own sub-memcgs). However, each job can have a high
number of mounts: there is no local disk, and each package of the job is
remotely mounted (it is a bit more complicated than that). We do have
issues with global memory reclaim, but mostly the proactive reclaim makes
global reclaim a tail issue (and at the tail it often does create havoc).
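
For anyone wanting to reproduce the per-cache breakdown quoted above, one
way is to sum <num_objs> * <objsize> for each cache in /proc/slabinfo. The
snippet below is only a minimal sketch of that idea (not something from the
original report): it assumes the 2.1 slabinfo format and root access, and it
ignores per-slab page overhead, so it slightly underestimates the true
footprint.

    #!/usr/bin/env python3
    # Rough estimate of slab memory per cache from /proc/slabinfo:
    # memory ~= <num_objs> * <objsize>.  Needs root to read slabinfo.

    def slab_usage(path="/proc/slabinfo"):
        usage = {}
        with open(path) as f:
            for line in f:
                # Skip the "slabinfo - version:" line and the "# name ..." header.
                if line.startswith(("slabinfo", "#")):
                    continue
                fields = line.split()
                name, num_objs, objsize = fields[0], int(fields[2]), int(fields[3])
                # Accumulate in case the same cache name appears more than once.
                usage[name] = usage.get(name, 0) + num_objs * objsize
        return usage

    if __name__ == "__main__":
        top = sorted(slab_usage().items(), key=lambda kv: -kv[1])[:10]
        for name, nbytes in top:
            print(f"{name:24s} {nbytes / (1 << 30):8.2f} GiB")

On a setup like the one described in the report, kmalloc-32 would
presumably dominate the output, in line with the ~102G figure above.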