[PATCH] mm: verify page type before getting memcg from it

Per the discussion with Dave [1], we currently assume that only objects
from memcg-associated slab pages are ever put on a list_lru:
list_lru_from_kmem() calls memcg_from_slab_page(), which makes no attempt
to verify that the page is actually a slab page. However, the binder code
(in drivers/android/binder_alloc.c) stores normal pages in its list_lru
rather than slab objects. The only reason binder does not hit this issue
today is that its list_lru is not configured to be memcg aware.
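
For reference, the fragile pattern is roughly the sketch below. This is an
illustrative reconstruction, not verbatim kernel code, and
lookup_memcg_for_item() is a made-up name standing in for the memcg-aware
lookup done on list_lru items:

    /*
     * Sketch only: the list_lru item pointer is converted to its head
     * page and handed straight to the slab helper, with no check that
     * the page is in fact a slab page.
     */
    static struct mem_cgroup *lookup_memcg_for_item(void *item)
    {
            struct page *page = virt_to_head_page(item);

            /* assumes a slab page; bogus for binder's plain pages */
            return memcg_from_slab_page(page);
    }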

To make this more robust, verify the page type before getting the memcg
from it. This patch introduces a new helper and renames the old one, so we
end up with the two helpers below:

struct mem_cgroup *__memcg_from_slab_page(struct page *page);
struct mem_cgroup *memcg_from_slab_page(struct page *page);

The first helper is for callers that know the page is a slab page and a
head page; the second is for callers that are not sure of the page type.
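
For illustration, a caller that cannot guarantee the page type would now
look roughly like the sketch below (mirroring the page_cgroup_ino() change
in the diff); example_page_memcg_ino() is a hypothetical name used only
here. Callers that already hold a known slab head page keep using
__memcg_from_slab_page() directly, as __mod_lruvec_slab_state() does after
this patch:

    /*
     * Hypothetical caller, for illustration only: works for slab head
     * pages and plain (non-slab) pages alike, because the wrapper falls
     * back to page->mem_cgroup when the page is not a slab head page.
     * rcu_read_lock() keeps the possibly-reparented memcg alive.
     */
    static ino_t example_page_memcg_ino(struct page *page)
    {
            struct mem_cgroup *memcg;
            ino_t ino = 0;

            rcu_read_lock();
            memcg = memcg_from_slab_page(page);
            if (memcg)
                    ino = cgroup_ino(memcg->css.cgroup);
            rcu_read_unlock();

            return ino;
    }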

[1] https://lore.kernel.org/linux-mm/20200106213103.GJ23195@xxxxxxxxxxxxxxxxxxx/

Suggested-by: Dave Chinner <david@xxxxxxxxxxxxx>
Signed-off-by: Yafang Shao <laoar.shao@xxxxxxxxx>
---
 mm/memcontrol.c |  7 ++-----
 mm/slab.h       | 24 +++++++++++++++++++++++-
 2 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 9bd4ea7..7658b8e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -460,10 +460,7 @@ ino_t page_cgroup_ino(struct page *page)
 	unsigned long ino = 0;
 
 	rcu_read_lock();
-	if (PageSlab(page) && !PageTail(page))
-		memcg = memcg_from_slab_page(page);
-	else
-		memcg = READ_ONCE(page->mem_cgroup);
+	memcg = memcg_from_slab_page(page);
 	while (memcg && !(memcg->css.flags & CSS_ONLINE))
 		memcg = parent_mem_cgroup(memcg);
 	if (memcg)
@@ -748,7 +745,7 @@ void __mod_lruvec_slab_state(void *p, enum node_stat_item idx, int val)
 	struct lruvec *lruvec;
 
 	rcu_read_lock();
-	memcg = memcg_from_slab_page(page);
+	memcg = __memcg_from_slab_page(page);
 
 	/* Untracked pages have no memcg, no lruvec. Update only the node */
 	if (!memcg || memcg == root_mem_cgroup) {
diff --git a/mm/slab.h b/mm/slab.h
index 7e94700..2444ae4 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -329,7 +329,7 @@ static inline struct kmem_cache *memcg_root_cache(struct kmem_cache *s)
  * The kmem_cache can be reparented asynchronously. The caller must ensure
  * the memcg lifetime, e.g. by taking rcu_read_lock() or cgroup_mutex.
  */
-static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
+static inline struct mem_cgroup *__memcg_from_slab_page(struct page *page)
 {
 	struct kmem_cache *s;
 
@@ -341,6 +341,23 @@ static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
 }
 
 /*
+ * Unlike the helper above, this one may be used when we are not sure
+ * whether the page passes PageSlab() && !PageTail(): for any other page
+ * it falls back to page->mem_cgroup.
+ */
+static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
+{
+	struct mem_cgroup *memcg;
+
+	if (PageSlab(page) && !PageTail(page))
+		memcg = __memcg_from_slab_page(page);
+	else
+		memcg = READ_ONCE(page->mem_cgroup);
+
+	return memcg;
+}
+
+/*
  * Charge the slab page belonging to the non-root kmem_cache.
  * Can be called for non-root kmem_caches only.
  */
@@ -438,6 +455,11 @@ static inline struct kmem_cache *memcg_root_cache(struct kmem_cache *s)
 	return s;
 }
 
+static inline struct mem_cgroup *__memcg_from_slab_page(struct page *page)
+{
+	return NULL;
+}
+
 static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
 {
 	return NULL;
-- 
1.8.3.1