Hi Johannes,

On Fri, Nov 8, 2013 at 1:14 AM, Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
> So how about this?

This patch seems to fix the OOM looping issue I reported to you a few
weeks ago. Tested on a 3.10.x kernel.

> ---
> From: Johannes Weiner <hannes@xxxxxxxxxxx>
> Subject: [patch] mm: memcg: reparent charges during css_free()
>
> Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
> Cc: stable@xxxxxxxxxx # 3.8+
> ---
>  mm/memcontrol.c | 29 ++++++++++++++++++++++++++++-
>  1 file changed, 28 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index cc4f9cbe760e..3dce2b50891c 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6341,7 +6341,34 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
>  static void mem_cgroup_css_free(struct cgroup_subsys_state *css)
>  {
>  	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
> -
> +	/*
> +	 * XXX: css_offline() would be where we should reparent all
> +	 * memory to prepare the cgroup for destruction.  However,
> +	 * memcg does not do css_tryget() and res_counter charging
> +	 * under the same RCU lock region, which means that charging
> +	 * could race with offlining, potentially leaking charges and
> +	 * sending out pages with stale cgroup pointers:
> +	 *
> +	 * #0                           #1
> +	 *                              rcu_read_lock()
> +	 *                              css_tryget()
> +	 *                              rcu_read_unlock()
> +	 * disable css_tryget()
> +	 * call_rcu()
> +	 *   offline_css()
> +	 *     reparent_charges()
> +	 *                              res_counter_charge()
> +	 *                              css_put()
> +	 *                                css_free()
> +	 *                              pc->mem_cgroup = dead memcg
> +	 *                              add page to lru
> +	 *
> +	 * We still reparent most charges in offline_css() simply
> +	 * because we don't want all these pages stuck if a long-term
> +	 * reference like a swap entry is holding on to the cgroup
> +	 * past offlining, but make sure we catch any raced charges:
> +	 */
> +	mem_cgroup_reparent_charges(memcg);
>  	memcg_destroy_kmem(memcg);
>  	__mem_cgroup_free(memcg);
>  }
> --
> 1.8.4.2

--
William