On Wed, Oct 04, 2017 at 02:24:26PM -0700, Shakeel Butt wrote:
> >> > +		if (memcg_has_children(iter))
> >> > +			continue;
> >>
> >> && iter != root_mem_cgroup ?
> >
> > Oh, sure. I had a stupid bug in my test script, which prevented me from
> > catching this. Thanks!
> >
> > This should fix the problem.
> > --
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 2e82625bd354..b3848bce4c86 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -2807,7 +2807,8 @@ static void select_victim_memcg(struct mem_cgroup *root, struct oom_control *oc)
> >  		 * We don't consider non-leaf non-oom_group memory cgroups
> >  		 * as OOM victims.
> >  		 */
> > -		if (memcg_has_children(iter) && !mem_cgroup_oom_group(iter))
> > +		if (memcg_has_children(iter) && iter != root_mem_cgroup &&
> > +		    !mem_cgroup_oom_group(iter))
> >  			continue;
>
> I think you are mixing the 3rd and 4th patch. The root_mem_cgroup
> check should be in 3rd while oom_group stuff should be in 4th.
>

Right. This "patch" should fix them both; sending two separate patches
here would just have been confusing. I'll split it before the final
landing.

> >> >
> >> Shouldn't there be a CSS_ONLINE check? Also instead of css_get at the
> >> end why not css_tryget_online() here and css_put for the previous
> >> selected one.
> >
> > Hm, why do we need to check this? I do not see, how we can choose
> > an OFFLINE memcg as a victim, tbh. Please, explain the problem.
> >
>
> Sorry about the confusion. There are two things. First, should we do a
> css_get on the newly selected memcg within the for loop when we still
> have a reference to it?

We're holding rcu_read_lock, which should be enough: we bump the css
counter just before releasing the rcu lock.

>
> Second, for the OFFLINE memcg, you are right oom_evaluate_memcg() will
> return 0 for offlined memcgs. Maybe no need to call
> oom_evaluate_memcg() for offlined memcgs.

Sounds like a good optimization, which can be done on top of the
current patchset.

Thank you!
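
For reference, a rough (untested) sketch of how I'd expect that skip to
look, assuming the selection loop in select_victim_memcg() keeps its
current shape. "iter" and "chosen" here just stand for the loop iterator
and the currently selected victim; mem_cgroup_online() is the existing
helper from memcontrol.h:

	struct mem_cgroup *iter, *chosen = NULL;

	rcu_read_lock();
	for_each_mem_cgroup_tree(iter, root) {
		/* don't bother scoring memcgs whose css is already offline */
		if (!mem_cgroup_online(iter))
			continue;

		/*
		 * ... the existing leaf / oom_group checks and
		 * oom_evaluate_memcg() scoring stay as they are,
		 * updating "chosen" when iter scores higher ...
		 */
	}
	/* pin the selected memcg before leaving the rcu section, as today */
	if (chosen)
		css_get(&chosen->css);
	rcu_read_unlock();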