Hi Joonsoo,
On 04/09/2013 09:21 AM, Joonsoo Kim wrote:
Currently, pages freed via RCU are not counted in reclaimed_slab, because
they are freed in RCU context rather than in the current task's context.
But this free is initiated by the task, so counting these pages toward the
task's reclaimed_slab is meaningful when deciding whether to continue
reclaiming. Change the code to count these pages in the task's reclaimed_slab.
Cc: Christoph Lameter <cl@xxxxxxxxxxxxxxxxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Cc: Matt Mackall <mpm@xxxxxxxxxxx>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
diff --git a/mm/slub.c b/mm/slub.c
index 4aec537..16fd2d5 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1409,8 +1409,6 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
memcg_release_pages(s, order);
page_mapcount_reset(page);
- if (current->reclaim_state)
- current->reclaim_state->reclaimed_slab += pages;
__free_memcg_kmem_pages(page, order);
}
@@ -1431,6 +1429,8 @@ static void rcu_free_slab(struct rcu_head *h)
static void free_slab(struct kmem_cache *s, struct page *page)
{
+ int pages = 1 << compound_order(page);
One question unrelated to this patch: why can a slab cache use compound
pages (hugetlbfs/THP pages)? I thought those were only used by applications
to reduce TLB misses, or am I wrong?
+
if (unlikely(s->flags & SLAB_DESTROY_BY_RCU)) {
struct rcu_head *head;
@@ -1450,6 +1450,9 @@ static void free_slab(struct kmem_cache *s, struct page *page)
call_rcu(head, rcu_free_slab);
} else
__free_slab(s, page);
+
+ if (current->reclaim_state)
+ current->reclaim_state->reclaimed_slab += pages;
}
static void discard_slab(struct kmem_cache *s, struct page *page)