On Tue, Mar 30, 2021 at 11:58:31AM -0700, Roman Gushchin wrote:
> On Tue, Mar 30, 2021 at 11:34:11AM -0700, Shakeel Butt wrote:
> > On Tue, Mar 30, 2021 at 3:20 AM Muchun Song <songmuchun@xxxxxxxxxxxxx> wrote:
> > >
> > > Since the following patchsets were applied, all kernel memory is
> > > charged with the new obj_cgroup APIs:
> > >
> > >   [v17,00/19] The new cgroup slab memory controller
> > >   [v5,0/7] Use obj_cgroup APIs to charge kmem pages
> > >
> > > But user memory allocations (LRU pages) still pin memcgs for a long
> > > time - this exists at a larger scale and is causing recurring
> > > problems in the real world: page cache doesn't get reclaimed for a
> > > long time, or is used by the second, third, fourth, ... instance of
> > > the same job that was restarted into a new cgroup every time.
> > > Unreclaimable dying cgroups pile up, waste memory, and make page
> > > reclaim very inefficient.
> > >
> > > We can convert LRU pages and most other raw memcg pins to the objcg
> > > direction to fix this problem, and then the LRU pages will no longer
> > > pin the memcgs.
> > >
> > > This patchset aims to make LRU pages drop their reference to the
> > > memory cgroup by using the obj_cgroup APIs. Finally, we can see that
> > > the number of dying cgroups does not increase if we run the
> > > following test script:
> > >
> > > ```bash
> > > #!/bin/bash
> > >
> > > cat /proc/cgroups | grep memory
> > >
> > > cd /sys/fs/cgroup/memory
> > >
> > > for i in range{1..500}
> > > do
> > >         mkdir test
> > >         echo $$ > test/cgroup.procs
> > >         sleep 60 &
> > >         echo $$ > cgroup.procs
> > >         echo `cat test/cgroup.procs` > cgroup.procs
> > >         rmdir test
> > > done
> > >
> > > cat /proc/cgroups | grep memory
> > > ```
> > >
> > > Patch 1 fixes page charging in page replacement.
> > > Patches 2-5 are code cleanup and simplification.
> > > Patches 6-15 convert the LRU page pin to the objcg direction.
> >
> > The main concern I have with *just* reparenting LRU pages is that for
> > long-running systems, the root memcg will become a dumping ground. In
> > addition, a job running multiple times on a machine will see
> > inconsistent memory usage if it re-accesses the file pages which were
> > reparented to the root memcg.
>
> I agree, but the reparenting is also not perfect in combination with
> memory protections (e.g. memory.low).
>
> Imagine the following configuration:
> workload.slice
> - workload_gen_1.service   memory.min = 30G
> - workload_gen_2.service   memory.min = 30G
> - workload_gen_3.service   memory.min = 30G
> ...
>
> Parent cgroup and several generations of the child cgroup, protected
> by a memory.low. Once the memory is getting reparented, it's not
> protected anymore.

That doesn't sound right.

A deleted cgroup today exerts no control over its abandoned pages.
css_reset() will blow out any control settings.

If you're talking about protection previously inherited by
workload.slice, that continues to apply as it always has (a rough
sketch follows below).

None of this is really accidental. By definition the workload.slice
control domain includes workload_gen_1.service. And by definition, the
workload_gen_1.service domain ceases to exist when you delete it.

There are no (or shouldn't be any!) semantic changes from the physical
unlinking from a dead control domain.

> Also, I'm somewhat concerned about the interaction of the reparenting
> with the writeback and dirty throttling. How does it work together?

What interaction specifically?
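Here is a rough cgroup2 sketch of the protection setup discussed above.
The slice/service names and the 30G memory.min come from Roman's
example; the 90G memory.low on workload.slice and $JOB_PID are made up
purely to show which domain keeps applying:

```bash
cd /sys/fs/cgroup

# Protection on workload.slice itself, configured from the parent
# domain. (90G is an arbitrary illustrative value.)
mkdir -p workload.slice
echo "+memory" > cgroup.subtree_control
echo 90G > workload.slice/memory.low

# One generation of the job, protected within the workload.slice domain.
echo "+memory" > workload.slice/cgroup.subtree_control
mkdir workload.slice/workload_gen_1.service
echo 30G > workload.slice/workload_gen_1.service/memory.min
echo "$JOB_PID" > workload.slice/workload_gen_1.service/cgroup.procs

# Delete the generation after the job has exited. css_reset() wipes its
# memory.min, and with this series its leftover page cache gets
# reparented into workload.slice - which is still covered by the
# memory.low above, exactly as the whole subtree was before the rmdir.
rmdir workload.slice/workload_gen_1.service
```

The only setting that disappears with the rmdir is the one belonging to
the domain that no longer exists.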
When you delete a cgroup that had both the block and the memory
controller enabled, the control domain of both goes away and its memory
becomes subject to whatever control domain is above it (if any). A
higher control domain in turn takes a recursive view of the subtree,
see mem_cgroup_wb_stats(), so when control is exerted, it applies
regardless of how and where pages are physically linked in children.

It's also already possible to enable e.g. block control only at a very
high level and memory control down to a lower level. By design this
code can live with different domain sizes for memory and block.
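As a made-up illustration of that last point, a layout where the block
domain is larger than the memory domain could look roughly like this
(the device number, the ~100MB/s writeback cap and the names are all
hypothetical):

```bash
cd /sys/fs/cgroup
mkdir -p workload.slice/workload_gen_1.service

# Distribute both controllers to workload.slice...
echo "+memory +io" > cgroup.subtree_control
# ...but only memory below it, so the block domain ends at the slice.
echo "+memory" > workload.slice/cgroup.subtree_control

# The io domain covers the whole slice; dirty throttling for it works
# off the recursive memcg view (mem_cgroup_wb_stats()), so it doesn't
# matter where in the subtree the dirty pages are physically linked.
echo "8:16 wbps=104857600" > workload.slice/io.max   # ~100MB/s

# Memory control keeps operating one level further down.
echo 30G > workload.slice/workload_gen_1.service/memory.min
```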