Re: [PATCH] mm: verify page type before getting memcg from it

On Fri, Jan 17, 2020 at 12:19 AM Roman Gushchin <guro@xxxxxx> wrote:
>
> On Thu, Jan 16, 2020 at 04:50:56PM +0100, Michal Hocko wrote:
> > [Cc Roman]
>
> Thanks!
>
> >
> > On Thu 16-01-20 09:10:11, Yafang Shao wrote:
> > > Per discussion with Dave[1], we always assume we only ever put objects from
> > > memcg-associated slab pages in the list_lru. list_lru_from_kmem() calls
> > > memcg_from_slab_page(), which makes no attempt to verify that the page is
> > > actually a slab page. But currently the binder code (in
> > > drivers/android/binder_alloc.c) stores normal pages in the list_lru, rather
> > > than slab objects. The only reason the binder doesn't hit this issue is
> > > that its list_lru is not configured to be memcg aware.
> > > In order to make this more robust, we should verify the page type before
> > > getting the memcg from it. In this patch, a new helper is introduced and
> > > the old helper is modified. Now we have two helpers, as below:
> > >
> > > struct mem_cgroup *__memcg_from_slab_page(struct page *page);
> > > struct mem_cgroup *memcg_from_slab_page(struct page *page);
> > >
> > > The first helper is used when we are sure the page is a slab page and is
> > > also a head page, while the second helper is used when we are not sure
> > > about the page type.
> > >
> > > [1].
> > > https://lore.kernel.org/linux-mm/20200106213103.GJ23195@xxxxxxxxxxxxxxxxxxx/
> > >
> > > Suggested-by: Dave Chinner <david@xxxxxxxxxxxxx>
> > > Signed-off-by: Yafang Shao <laoar.shao@xxxxxxxxx>
>
> Hello Yafang!
>
> I actually have something similar in my patch queue, but I'm adding
> a helper which takes a kernel pointer rather than a page:
>   struct mem_cgroup *mem_cgroup_from_obj(void *p);
>
> Will it work for you? If so, I can send it separately.
>

Yes, it fixes the issue as well. Please send it separately.
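
For context, the two-helper split I described above boils down to checking
the page type before trusting the slab-specific lookup. A rough sketch of
that shape (illustrative only, not the exact code in my patch; the
slab_cache/memcg_params details follow what mm/slab.h currently does):

  /* Caller already knows @page is a slab head page. */
  struct mem_cgroup *__memcg_from_slab_page(struct page *page)
  {
          struct kmem_cache *s = READ_ONCE(page->slab_cache);

          /* Root caches are not charged to any memcg. */
          if (s && !is_root_cache(s))
                  return READ_ONCE(s->memcg_params.memcg);

          return NULL;
  }

  /* Caller is not sure about the page type: verify it before the lookup. */
  struct mem_cgroup *memcg_from_slab_page(struct page *page)
  {
          if (!PageSlab(page) || PageTail(page))
                  return NULL;

          return __memcg_from_slab_page(page);
  }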

> (I'm working on switching to per-object accounting of slab objects, so
> that slab pages can be shared between multiple cgroups. That will
> require a change like this.)
>
> Thanks!
>
> --
>
> From fc2b1ec53285edcb0017275019d60bd577bf64a9 Mon Sep 17 00:00:00 2001
> From: Roman Gushchin <guro@xxxxxx>
> Date: Thu, 2 Jan 2020 15:22:19 -0800
> Subject: [PATCH] mm: memcg/slab: introduce mem_cgroup_from_obj()
>
> Sometimes we need to get a memcg pointer from a charged kernel object.
> The right way to do it depends on whether it's a proper slab object or
> it's backed by raw pages (e.g. it's a vmalloc allocation). In the first
> case the kmem_cache->memcg_params.memcg indirection should be used;
> in the second case it's just page->mem_cgroup.
>
> To simplify this task and hide these implementation details let's
> introduce the mem_cgroup_from_obj() helper, which takes a pointer
> to any kernel object and returns a valid memcg pointer or NULL.
>
> The caller is still responsible for ensuring that the returned memcg
> isn't going away underneath it: take the rcu read lock, hold the
> cgroup mutex, etc.
>
> mem_cgroup_from_kmem() defined in mm/list_lru.c is now obsolete
> and can be removed.
>
> Signed-off-by: Roman Gushchin <guro@xxxxxx>

Acked-by: Yafang Shao <laoar.shao@xxxxxxxxx>
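
A minimal usage sketch, mirroring what __mod_lruvec_slab_state() does after
this patch (the caller below is hypothetical, for illustration only):

  #include <linux/memcontrol.h>
  #include <linux/rcupdate.h>
  #include <linux/printk.h>

  /*
   * mem_cgroup_from_obj() returns a memcg pointer (or NULL) but takes no
   * reference, so the lookup and any use of the result must happen under
   * rcu_read_lock() (or while holding cgroup_mutex, or a charged object
   * that keeps the memcg alive).
   */
  static void report_obj_memcg(void *ptr)
  {
          struct mem_cgroup *memcg;

          rcu_read_lock();
          memcg = mem_cgroup_from_obj(ptr);
          if (memcg)
                  pr_debug("object %px is charged to memcg %px\n", ptr, memcg);
          rcu_read_unlock();
  }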

> ---
>  include/linux/memcontrol.h |  7 +++++++
>  mm/list_lru.c              | 12 +-----------
>  mm/memcontrol.c            | 32 +++++++++++++++++++++++++++++---
>  3 files changed, 37 insertions(+), 14 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index c372bed6be80..0f6f8e18029e 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -420,6 +420,8 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *, struct pglist_data *);
>
>  struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
>
> +struct mem_cgroup *mem_cgroup_from_obj(void *p);
> +
>  struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
>
>  struct mem_cgroup *get_mem_cgroup_from_page(struct page *page);
> @@ -912,6 +914,11 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
>         return true;
>  }
>
> +static inline struct mem_cgroup *mem_cgroup_from_obj(void *p)
> +{
> +       return NULL;
> +}
> +
>  static inline struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
>  {
>         return NULL;
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index 0f1f6b06b7f3..8de5e3784ee4 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -57,16 +57,6 @@ list_lru_from_memcg_idx(struct list_lru_node *nlru, int idx)
>         return &nlru->lru;
>  }
>
> -static __always_inline struct mem_cgroup *mem_cgroup_from_kmem(void *ptr)
> -{
> -       struct page *page;
> -
> -       if (!memcg_kmem_enabled())
> -               return NULL;
> -       page = virt_to_head_page(ptr);
> -       return memcg_from_slab_page(page);
> -}
> -
>  static inline struct list_lru_one *
>  list_lru_from_kmem(struct list_lru_node *nlru, void *ptr,
>                    struct mem_cgroup **memcg_ptr)
> @@ -77,7 +67,7 @@ list_lru_from_kmem(struct list_lru_node *nlru, void *ptr,
>         if (!nlru->memcg_lrus)
>                 goto out;
>
> -       memcg = mem_cgroup_from_kmem(ptr);
> +       memcg = mem_cgroup_from_obj(ptr);
>         if (!memcg)
>                 goto out;
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 6e1ee8577ecf..99d6fe9d7026 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -757,13 +757,12 @@ void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
>
>  void __mod_lruvec_slab_state(void *p, enum node_stat_item idx, int val)
>  {
> -       struct page *page = virt_to_head_page(p);
> -       pg_data_t *pgdat = page_pgdat(page);
> +       pg_data_t *pgdat = page_pgdat(virt_to_page(p));
>         struct mem_cgroup *memcg;
>         struct lruvec *lruvec;
>
>         rcu_read_lock();
> -       memcg = memcg_from_slab_page(page);
> +       memcg = mem_cgroup_from_obj(p);
>
>         /* Untracked pages have no memcg, no lruvec. Update only the node */
>         if (!memcg || memcg == root_mem_cgroup) {
> @@ -2636,6 +2635,33 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg,
>                 unlock_page_lru(page, isolated);
>  }
>
> +/*
> + * Returns a pointer to the memory cgroup to which the kernel object is charged.
> + *
> + * The caller must ensure the memcg lifetime, e.g. by owning a charged object,
> + * taking rcu_read_lock() or cgroup_mutex.
> + */
> +struct mem_cgroup *mem_cgroup_from_obj(void *p)
> +{
> +       struct page *page;
> +
> +       if (mem_cgroup_disabled())
> +               return NULL;
> +
> +       page = virt_to_head_page(p);
> +
> +       /*
> +        * Slab pages don't have page->mem_cgroup set because corresponding
> +        * kmem caches can be reparented during the lifetime. That's why
> +        * cache->memcg_params.memcg pointer should be used instead.
> +        */
> +       if (PageSlab(page))
> +               return memcg_from_slab_page(page);
> +
> +       /* All other pages use page->mem_cgroup */
> +       return page->mem_cgroup;
> +}
> +
>  #ifdef CONFIG_MEMCG_KMEM
>  static int memcg_alloc_cache_id(void)
>  {
> --
> 2.21.1
>



