From: Michal Hocko <mhocko@xxxxxxx>

mem_cgroup_from_task has always been a tricky API.  It was added by
78fb74669e80 ("Memory controller: accounting setup") for
mm_struct::mem_cgroup initialization.  Later on it gained new callers,
mostly due to the mm_struct::mem_cgroup -> mm_struct::owner transition,
and most users had to do mem_cgroup_from_task(mm->owner) to get the
resulting memcg.  Now that mm_struct::owner is gone this is no longer
necessary, yet the API is still confusing.

One tricky part has always been that the API sounds generic but it is
not really.  mem_cgroup_from_task(current) doesn't necessarily mean the
same thing as current->mm->memcg (previously
mem_cgroup_from_task(current->mm->owner)) because the mm might be
associated with a different cgroup than the process.  Another tricky
part is that p->mm->memcg is unsafe if p != current, as pointed out by
Oleg, because nobody is holding a reference on that mm.

This is not a problem right now because we have only 2 callers in the
tree.  sock_update_memcg operates on current, and task_in_mem_cgroup
only calls the function for a task whose threads have all detached
their mm, so it always ends up in the task_css fallback.

Let's ditch this function, use current->mm->memcg in sock_update_memcg
and task_css in task_in_mem_cgroup.  This doesn't have any functional
effect.

Signed-off-by: Michal Hocko <mhocko@xxxxxxx>
---
 mm/memcontrol.c | 24 +++++++-----------------
 1 file changed, 7 insertions(+), 17 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 4069ec8f52be..fb8e9bd04a29 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -294,18 +294,6 @@ static inline struct mem_cgroup *mem_cgroup_from_id(unsigned short id)
 	return mem_cgroup_from_css(css);
 }
 
-static struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p)
-{
-	if (p->mm)
-		return rcu_dereference(p->mm->memcg);
-
-	/*
-	 * If the process doesn't have mm struct anymore we have to fallback
-	 * to the task_css.
-	 */
-	return mem_cgroup_from_css(task_css(p, memory_cgrp_id));
-}
-
 /* Writing them here to avoid exposing memcg's inner layout */
 #if defined(CONFIG_INET) && defined(CONFIG_MEMCG_KMEM)
 
@@ -332,7 +320,7 @@ void sock_update_memcg(struct sock *sk)
 		}
 
 		rcu_read_lock();
-		memcg = mem_cgroup_from_task(current);
+		memcg = rcu_dereference(current->mm->memcg);
 		cg_proto = sk->sk_prot->proto_cgroup(memcg);
 		if (cg_proto && memcg_proto_active(cg_proto) &&
 		    css_tryget_online(&memcg->css)) {
@@ -1091,12 +1079,14 @@ bool task_in_mem_cgroup(struct task_struct *task, struct mem_cgroup *memcg)
 		task_unlock(p);
 	} else {
 		/*
-		 * All threads may have already detached their mm's, but the oom
-		 * killer still needs to detect if they have already been oom
-		 * killed to prevent needlessly killing additional tasks.
+		 * All threads have already detached their mm's but we should
+		 * still be able to at least guess the original memcg from the
+		 * task_css.  These two will match most of the time but there
+		 * are corner cases where task->mm and task_css refer to
+		 * different cgroups.
 		 */
 		rcu_read_lock();
-		task_memcg = mem_cgroup_from_task(task);
+		task_memcg = mem_cgroup_from_css(task_css(task, memory_cgrp_id));
 		css_get(&task_memcg->css);
 		rcu_read_unlock();
 	}
-- 
2.1.4
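
For reference, the access pattern the patch relies on can be sketched as
below.  This is a minimal illustration, not part of the patch: it assumes
the mm_struct::memcg field introduced earlier in this series, it is
written as if it sat next to the other helpers in mm/memcontrol.c (where
mem_cgroup_from_css, task_css and css_tryget_online are visible), and
get_current_memcg is a hypothetical name used only here.  It shows why
the lookup is only valid for current: the memcg pointer is stable only
under rcu_read_lock(), and it has to be pinned with css_tryget_online()
before it can be used outside the RCU section.

/*
 * Sketch only: safe memcg lookup for the *current* task, assuming the
 * mm_struct::memcg field from this series.  For p != current nothing
 * pins p->mm, so p->mm->memcg must not be dereferenced like this;
 * such callers should use task_css instead.
 */
static struct mem_cgroup *get_current_memcg(void)
{
	struct mem_cgroup *memcg = NULL;

	rcu_read_lock();
	if (current->mm)
		memcg = rcu_dereference(current->mm->memcg);
	if (!memcg) {
		/* kernel threads or an already gone mm: use the task_css */
		memcg = mem_cgroup_from_css(task_css(current,
						     memory_cgrp_id));
	}
	/* pin the css so the memcg outlives rcu_read_unlock() */
	if (!css_tryget_online(&memcg->css))
		memcg = NULL;
	rcu_read_unlock();

	return memcg;	/* caller drops the reference with css_put() */
}

The css_tryget_online() step mirrors what sock_update_memcg already does
in the hunk above; without it the memcg could be freed as soon as the
RCU read-side section ends.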