On Fri, Feb 07, 2020 at 09:18:07PM +0000, Chris Down wrote:
> > @@ -6856,8 +6857,12 @@ int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
> >  		}
> >  	}
> >
> > -	if (!memcg)
> > -		memcg = get_mem_cgroup_from_mm(mm);
> > +	if (!memcg) {
> > +		if (!mm)
> > +			memcg = get_mem_cgroup_from_current();
> > +		else
> > +			memcg = get_mem_cgroup_from_mm(mm);
> > +	}
>
> Just to do due diligence, did we double check whether this results in any
> unintentional shift in accounting for those passing in both mm and memcg
> as NULL with no current->active_memcg set, since previously we never even
> tried to consult current->mm and always used root_mem_cgroup in
> get_mem_cgroup_from_mm?

Excellent question on a subtle issue. But nobody actually passes NULL.
They either pass current->mm (or a destination mm) in syscalls, or
vma->vm_mm in page faults.

The only time we end up with a NULL mm is when kernel threads do
something and have !current->mm. We redirect those to root_mem_cgroup.
So this patch doesn't change those semantics.
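
If it helps to see the fallback chain in one place, here is a rough
sketch of the lookup order with this patch applied. The helper name is
made up, and RCU plus css refcounting are stripped out, so read it as a
paraphrase of get_mem_cgroup_from_current()/get_mem_cgroup_from_mm()
rather than the exact upstream code:

	/* Hypothetical helper, only to illustrate the lookup order. */
	static struct mem_cgroup *charge_memcg_sketch(struct mm_struct *mm)
	{
		if (!mm) {
			/* roughly what get_mem_cgroup_from_current() does */
			if (current->active_memcg)
				return current->active_memcg;
			mm = current->mm;	/* still NULL for kernel threads */
		}

		/* roughly what get_mem_cgroup_from_mm() does */
		if (!mm)
			return root_mem_cgroup;

		return mem_cgroup_from_task(mm->owner);
	}

With neither current->active_memcg nor current->mm set (the kernel
thread case), both the old get_mem_cgroup_from_mm(NULL) path and the
new get_mem_cgroup_from_current() path land on root_mem_cgroup, which
is why the accounting shouldn't shift.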