We have only a few places where we actually want to charge kmem, so
instead of intruding into the general page allocation path with
__GFP_KMEMCG it's better to charge kmem there explicitly. All kmem
charges will be easier to follow that way.

This is a step toward removing __GFP_KMEMCG. It makes fork charge task
thread_info pages explicitly instead of passing __GFP_KMEMCG to
alloc_pages.

Signed-off-by: Vladimir Davydov <vdavydov@xxxxxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxx>
Cc: Glauber Costa <glommer@xxxxxxxxx>
---
 kernel/fork.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/kernel/fork.c b/kernel/fork.c
index f4b09bc15f3a..8209780cf732 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -150,15 +150,22 @@ void __weak arch_release_thread_info(struct thread_info *ti)
 static struct thread_info *alloc_thread_info_node(struct task_struct *tsk,
 						  int node)
 {
-	struct page *page = alloc_pages_node(node, THREADINFO_GFP_ACCOUNTED,
-					     THREAD_SIZE_ORDER);
+	struct page *page;
+	struct mem_cgroup *memcg = NULL;
+
+	if (!memcg_kmem_newpage_charge(THREADINFO_GFP_ACCOUNTED, &memcg,
+				       THREAD_SIZE_ORDER))
+		return NULL;
+
+	page = alloc_pages_node(node, THREADINFO_GFP, THREAD_SIZE_ORDER);
+	memcg_kmem_commit_charge(page, memcg, THREAD_SIZE_ORDER);
 	return page ? page_address(page) : NULL;
 }
 
 static inline void free_thread_info(struct thread_info *ti)
 {
-	free_memcg_kmem_pages((unsigned long)ti, THREAD_SIZE_ORDER);
+	if (ti)
+		memcg_kmem_uncharge_pages(virt_to_page(ti), THREAD_SIZE_ORDER);
+	free_pages((unsigned long)ti, THREAD_SIZE_ORDER);
 }
 #else
 static struct kmem_cache *thread_info_cache;
-- 
1.7.10.4