On Mon, 23 Mar 2020 18:03:58 -0700 Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:

> On Mon, 23 Mar 2020 17:42:21 -0700 Roman Gushchin <guro@xxxxxx> wrote:
>
> > How about this one? I've merged them into one and stripped it a little bit.
> >
> > Thanks!
>
> Yes, that looks good.  Here's the delta from the previously reviewed
> version.  I think it's valid to retain those acks and reviewed-by's.
>

And here's the altered "mm: memcg/slab: introduce mem_cgroup_from_obj()",
which I have renamed to "mm: memcg/slab: use mem_cgroup_from_obj()":

The end result is slightly different - mem_cgroup_from_obj() will now end
up inside #ifdef CONFIG_MEMCG_KMEM.  Should I undo that?


From: Roman Gushchin <guro@xxxxxx>
Subject: mm: memcg/slab: use mem_cgroup_from_obj()

Sometimes we need to get a memcg pointer from a charged kernel object.
The right way to get it depends on whether it's a proper slab object or
whether it's backed by raw pages (e.g. a vmalloc allocation).  In the
first case the kmem_cache->memcg_params.memcg indirection should be
used; in other cases it's just page->mem_cgroup.

To simplify this task and hide the implementation details, let's use
the mem_cgroup_from_obj() helper, which takes a pointer to any kernel
object and returns a valid memcg pointer or NULL.

Passing a kernel address rather than a pointer to a page will allow
this helper to be used for per-object (rather than per-page) tracked
objects in the future.

The caller is still responsible for ensuring that the returned memcg
isn't going away underneath: take the rcu read lock, cgroup mutex,
etc., depending on the context.

mem_cgroup_from_kmem() defined in mm/list_lru.c is now obsolete and can
be removed.

Link: http://lkml.kernel.org/r/20200117203609.3146239-1-guro@xxxxxx
Signed-off-by: Roman Gushchin <guro@xxxxxx>
Acked-by: Yafang Shao <laoar.shao@xxxxxxxxx>
Reviewed-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Vladimir Davydov <vdavydov.dev@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/list_lru.c   |   12 +-----------
 mm/memcontrol.c |    5 ++---
 2 files changed, 3 insertions(+), 14 deletions(-)

--- a/mm/list_lru.c~mm-memcg-slab-introduce-mem_cgroup_from_obj
+++ a/mm/list_lru.c
@@ -57,16 +57,6 @@ list_lru_from_memcg_idx(struct list_lru_
 	return &nlru->lru;
 }
 
-static __always_inline struct mem_cgroup *mem_cgroup_from_kmem(void *ptr)
-{
-	struct page *page;
-
-	if (!memcg_kmem_enabled())
-		return NULL;
-	page = virt_to_head_page(ptr);
-	return memcg_from_slab_page(page);
-}
-
 static inline struct list_lru_one *
 list_lru_from_kmem(struct list_lru_node *nlru, void *ptr,
 		   struct mem_cgroup **memcg_ptr)
@@ -77,7 +67,7 @@ list_lru_from_kmem(struct list_lru_node
 	if (!nlru->memcg_lrus)
 		goto out;
 
-	memcg = mem_cgroup_from_kmem(ptr);
+	memcg = mem_cgroup_from_obj(ptr);
 	if (!memcg)
 		goto out;
 
--- a/mm/memcontrol.c~mm-memcg-slab-introduce-mem_cgroup_from_obj
+++ a/mm/memcontrol.c
@@ -759,13 +759,12 @@ void __mod_lruvec_state(struct lruvec *l
 
 void __mod_lruvec_slab_state(void *p, enum node_stat_item idx, int val)
 {
-	struct page *page = virt_to_head_page(p);
-	pg_data_t *pgdat = page_pgdat(page);
+	pg_data_t *pgdat = page_pgdat(virt_to_page(p));
 	struct mem_cgroup *memcg;
 	struct lruvec *lruvec;
 
 	rcu_read_lock();
-	memcg = memcg_from_slab_page(page);
+	memcg = mem_cgroup_from_obj(p);
 
 	/* Untracked pages have no memcg, no lruvec.  Update only the node */
 	if (!memcg || memcg == root_mem_cgroup) {
_
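
For readers following along: the hunks above only convert the existing
callers; the helper itself comes from the earlier "introduce" patch,
which isn't quoted here (and, per the note above, it may sit under
#ifdef CONFIG_MEMCG_KMEM in -mm).  As a rough sketch only - assuming the
v5.6-era memcg/slab layout, where slab objects are charged through their
kmem_cache and all other charged pages carry page->mem_cgroup directly -
the helper can be pictured along these lines:

/*
 * Sketch only - not the exact code from the "introduce" patch.
 * Assumes PageSlab(), memcg_from_slab_page() and page->mem_cgroup as
 * they exist in this kernel generation.
 */
struct mem_cgroup *mem_cgroup_from_obj(void *p)
{
	struct page *page;

	if (mem_cgroup_disabled())
		return NULL;

	page = virt_to_head_page(p);

	/*
	 * Slab objects don't rely on page->mem_cgroup, because their
	 * kmem_cache can be reparented; go through the slab page's
	 * memcg_params indirection instead.
	 */
	if (PageSlab(page))
		return memcg_from_slab_page(page);

	/* All other pages (e.g. vmalloc backing pages) use page->mem_cgroup. */
	return page->mem_cgroup;
}

As the __mod_lruvec_slab_state() hunk shows, callers wrap the lookup in
rcu_read_lock()/rcu_read_unlock() (or hold one of the other guarantees
mentioned in the changelog), since the returned memcg isn't pinned.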