On Sun, Jul 22, 2018 at 11:44 PM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
>
> On Thu 19-07-18 09:23:10, Shakeel Butt wrote:
> > On Thu, Jul 19, 2018 at 3:43 AM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
> > >
> > > [CC Andrew]
> > >
> > > On Thu 19-07-18 18:06:47, Jing Xia wrote:
> > > > It was reported that a kernel crash happened in mem_cgroup_iter(),
> > > > which can be triggered if the legacy cgroup-v1 non-hierarchical
> > > > mode is used.
> > > >
> > > > Unable to handle kernel paging request at virtual address 6b6b6b6b6b6b8f
> > > > ......
> > > > Call trace:
> > > >  mem_cgroup_iter+0x2e0/0x6d4
> > > >  shrink_zone+0x8c/0x324
> > > >  balance_pgdat+0x450/0x640
> > > >  kswapd+0x130/0x4b8
> > > >  kthread+0xe8/0xfc
> > > >  ret_from_fork+0x10/0x20
> > > >
> > > > mem_cgroup_iter():
> > > >     ......
> > > >     if (css_tryget(css))    <-- crash here
> > > >         break;
> > > >     ......
> > > >
> > > > The crashing reason is that mem_cgroup_iter() uses the memcg object
> > > > whose pointer is stored in iter->position, which has been freed before
> > > > and filled with POISON_FREE(0x6b).
> > > >
> > > > And the root cause of the use-after-free issue is that
> > > > invalidate_reclaim_iterators() fails to reset the value of
> > > > iter->position to NULL when the css of the memcg is released in non-
> > > > hierarchical mode.
> > >
> > > Well spotted!
> > >
> > > I suspect
> > > Fixes: 6df38689e0e9 ("mm: memcontrol: fix possible memcg leak due to interrupted reclaim")
> > >
> > > but maybe it goes further into the past. I also suggest
> > > Cc: stable
> > >
> > > even though the non-hierarchical mode is strongly discouraged.
> >
> > Why not set root_mem_cgroup's use_hierarchy to true by default on
> > init? If someone wants non-hierarchical mode, they can explicitly set
> > it to false.
>
> We do not usually change defaults under users' feet.

Then how is non-hierarchical mode being discouraged currently? I don't
see any comments in the docs.

Shakeel
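
For context, a minimal sketch of the shape such a fix could take is shown
below. It is only an illustration against the ~4.18-era mm/memcontrol.c
layout (per-node reclaim iterators indexed by priority), not the patch under
discussion, and it assumes the failure mode is that parent_mem_cgroup()
follows the page_counter parent, which is not set up in non-hierarchical
mode, so the walk from the dying memcg never reaches root_mem_cgroup, whose
iterators are the ones global reclaim actually uses. The helper name
clear_reclaim_iterators() is hypothetical.

/*
 * Illustrative sketch only, not the patch under discussion.  Assumes the
 * ~4.18-era layout where struct mem_cgroup_per_node carries one reclaim
 * iterator per priority level; clear_reclaim_iterators() is a hypothetical
 * helper name.
 */
static void clear_reclaim_iterators(struct mem_cgroup *from,
				    struct mem_cgroup *dead_memcg)
{
	struct mem_cgroup_reclaim_iter *iter;
	struct mem_cgroup_per_node *mz;
	int nid;
	int i;

	for_each_node(nid) {
		mz = mem_cgroup_nodeinfo(from, nid);
		for (i = 0; i <= DEF_PRIORITY; i++) {
			iter = &mz->iter[i];
			/* Only clear positions still pointing at the dying memcg. */
			cmpxchg(&iter->position, dead_memcg, NULL);
		}
	}
}

static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
{
	struct mem_cgroup *memcg = dead_memcg;
	struct mem_cgroup *last;

	/* Clear the dying memcg itself and every page_counter ancestor. */
	do {
		clear_reclaim_iterators(memcg, dead_memcg);
		last = memcg;
	} while ((memcg = parent_mem_cgroup(memcg)));

	/*
	 * In cgroup-v1 non-hierarchical mode the page_counter parent is not
	 * set, so the walk above never reaches root_mem_cgroup.  Global
	 * reclaim stores its position in the root's iterators, so clear
	 * them explicitly as well.
	 */
	if (last != root_mem_cgroup)
		clear_reclaim_iterators(root_mem_cgroup, dead_memcg);
}

Whatever form the final patch takes, the property it needs to guarantee is
that once a memcg's css is released, no reclaim iterator of any memcg that
can serve as a reclaim root is left pointing at the freed object.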