Re: [patch] mm: skip rebalance of hopeless zones

On Thu, Dec 9, 2010 at 11:01 AM, Andrew Morton
<akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
> On Thu, 9 Dec 2010 10:55:24 +0900 Minchan Kim <minchan.kim@xxxxxxxxx> wrote:
>
>> >> > leaves them to direct reclaim.
>> >>
>> >> Hi!
>> >>
>> >> We are experiencing a similar issue, though with a 757 MB Normal zone,
>> >> where kswapd tries to rebalance Normal after an order-3 allocation while
>> >> page cache allocations (order-0) keep splitting it back up again.  It can
>> >> run the whole day like this (SSD storage) without sleeping.
>> >
>> > People at google have told me they've seen the same thing.  A fork is
>> > taking 15 minutes when someone else is doing a dd, because the fork
>> > enters direct-reclaim trying for an order-one page.  It successfully
>> > frees some order-one pages but before it gets back to allocate one, dd
>> > has gone and stolen them, or split them apart.
>> >
>> > This problem would have got worse when slub came along doing its stupid
>> > unnecessary high-order allocations.
>> >
>> > Billions of years ago a direct-reclaimer had a one-deep cache in the
>> > task_struct into which it freed the page to prevent it from getting
>> > stolen.
>> >
>> > Later, we took that out because pages were being freed into the
>> > per-cpu-pages magazine, which is effectively task-local anyway.  But
>> > per-cpu-pages are only for order-0 pages.  See slub stupidity, above.
>> >
>> > I expect that this is happening so repeatably because the
>> > direct-reclaimer is doing a sleep somewhere after freeing the pages it
>> > needs - if it wasn't doing that then surely the window wouldn't be wide
>> > enough for it to happen so often.  But I didn't look.
>> >
>> > Suitable fixes might be
>> >
>> > a) don't go to sleep after the successful direct-reclaim.
>>
>> It can't guarantee success, since direct reclaim may need to sleep for !GFP_ATOMIC allocations.
>
> It doesn't necessarily need to sleep *after* successfully freeing
> pages.  If it needs to sleep then do it before or during the freeing.
>
>> >
>> > b) reinstate the one-deep task-local free page cache.
>>
>> I like b) so how about this?
>> Just for the concept.
>>
>> @@ -1880,7 +1881,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
>>         reclaim_state.reclaimed_slab = 0;
>>         p->reclaim_state = &reclaim_state;
>>
>> -       *did_some_progress = try_to_free_pages(zonelist, order, gfp_mask, nodemask);
>> +       *did_some_progress = try_to_free_pages(zonelist, order, gfp_mask, nodemask, &ret_pages);
>>
>>         p->reclaim_state = NULL;
>>         lockdep_clear_current_reclaim_state();
>> @@ -1892,10 +1893,11 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
>>                 return NULL;
>>
>>  retry:
>> -       page = get_page_from_freelist(gfp_mask, nodemask, order,
>> -                                       zonelist, high_zoneidx,
>> -                                       alloc_flags, preferred_zone,
>> -                                       migratetype);
>> +       if (!list_empty(&ret_pages)) {
>> +               page = lru_to_page(&ret_pages);
>> +               list_del(&page->lru);
>> +               free_page_list(&ret_pages);
>> +       }
>
> Maybe.  Or just pass a page*.
>

I did it in more detail, but it's not complete or tested at all.
Please consider it just an RFC.
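
(To make the race easier to see outside the kernel, here is a minimal
user-space sketch of the problem and of the approach the patch below
takes, i.e. keeping the reclaimed blocks on a caller-owned list. It is
illustration only: the pool/block/reclaim names are invented and do not
correspond to any kernel interface.)

#include <stdio.h>
#include <stdlib.h>

/* A "page block" of some order, linked through a singly linked freelist. */
struct block {
	struct block *next;
	int order;
};

/* Shared freelist, standing in for the per-zone free lists. */
static struct block *shared_pool;

static void pool_free(struct block *b)
{
	b->next = shared_pool;
	shared_pool = b;
}

/* Remove and return the first block of the requested order, if any. */
static struct block *pool_alloc(int order)
{
	struct block **p;

	for (p = &shared_pool; *p; p = &(*p)->next) {
		if ((*p)->order == order) {
			struct block *b = *p;
			*p = b->next;
			return b;
		}
	}
	return NULL;
}

/*
 * "Reclaim" one block of the given order.  Old behaviour: free it into
 * the shared pool, where anyone may take it first.  RFC behaviour: if
 * the caller passed a private list, put the block there instead.
 */
static void reclaim(int order, struct block **private_list)
{
	struct block *b = malloc(sizeof(*b));

	b->order = order;
	if (private_list) {
		b->next = *private_list;
		*private_list = b;
	} else {
		pool_free(b);
	}
}

int main(void)
{
	struct block *mine = NULL;
	struct block *stolen, *retry;

	/* Old scheme: a concurrent consumer wins the race for the block. */
	reclaim(1, NULL);
	stolen = pool_alloc(1);		/* the "dd" side grabs it first */
	retry = pool_alloc(1);		/* our retry finds nothing (NULL) */
	printf("old scheme: thief=%p, our retry=%p\n", (void *)stolen, (void *)retry);

	/* RFC scheme: the reclaimed block never touches the shared pool. */
	reclaim(1, &mine);
	printf("new scheme: private list head=%p\n", (void *)mine);

	free(stolen);
	free(mine);
	return 0;
}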

-- CUT_HERE --

From 6366ee75da30ed564291e897ef8e844f4a49d454 Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan.kim@xxxxxxxxx>
Date: Thu, 9 Dec 2010 14:01:32 +0900
Subject: [PATCH] Keep freed pages in direct reclaim

A process in direct reclaim often sleeps and races with other processes.
Even when direct reclaim needs a high-order page (order > 0) and reclaims
pages successfully, other processes that only need order-0 pages can steal
the high-order page from under the direct reclaimer.

The direct reclaimer then has to try again, and the same scenario can
repeat. This has several bad effects:

1. direct reclaim latency becomes large
2. working set pages are evicted due to lumpy reclaim
3. kswapd keeps getting woken up

This patch solves it by keeping the reclaimed pages on a list owned by
the direct reclaimer.

Signed-off-by: Minchan Kim <minchan.kim@xxxxxxxxx>
---
 fs/buffer.c          |    2 +-
 include/linux/swap.h |    4 +++-
 mm/page_alloc.c      |   27 +++++++++++++++++++++++----
 mm/vmscan.c          |   23 +++++++++++++++++++----
 4 files changed, 46 insertions(+), 10 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 20a41c6..9eaf924 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -310,7 +310,7 @@ static void free_more_memory(void)
                                                &zone);
                if (zone)
                        try_to_free_pages(node_zonelist(nid, GFP_NOFS), 0,
-                                               GFP_NOFS, NULL);
+                                               GFP_NOFS, NULL, NULL);
        }
 }

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 84375e4..3fbf197 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -238,8 +238,10 @@ static inline void lru_cache_add_file(struct page *page)
 #define ISOLATE_BOTH 2         /* Isolate both active and inactive pages. */

 /* linux/mm/vmscan.c */
+extern noinline_for_stack void free_page_list(struct list_head *free_pages);
 extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
-                                       gfp_t gfp_mask, nodemask_t *mask);
+                                       gfp_t gfp_mask, nodemask_t *mask,
+                                       struct list_head *freed_pages);
 extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *mem,
                                                  gfp_t gfp_mask, bool noswap,
                                                  unsigned int swappiness);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1845a97..99ccc21 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1869,8 +1869,11 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
        struct page *page = NULL;
        struct reclaim_state reclaim_state;
        struct task_struct *p = current;
+       bool high_order;
        bool drained = false;
+       LIST_HEAD(freed_pages);

+       high_order = order ? true : false;
        cond_resched();

        /* We now go into synchronous reclaim */
@@ -1880,16 +1883,31 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
        reclaim_state.reclaimed_slab = 0;
        p->reclaim_state = &reclaim_state;

-       *did_some_progress = try_to_free_pages(zonelist, order, gfp_mask, nodemask);
-
+       /*
+        * If the request is high order, keep the reclaimed pages on our
+        * own list to prevent other processes from stealing them.
+        */
+       *did_some_progress = try_to_free_pages(zonelist, order, gfp_mask,
+                               nodemask, high_order ? &freed_pages : NULL);
        p->reclaim_state = NULL;
        lockdep_clear_current_reclaim_state();
        p->flags &= ~PF_MEMALLOC;

+       if (high_order && !list_empty(&freed_pages)) {
+               free_page_list(&freed_pages);
+               drain_all_pages();
+               drained = true;
+               page = get_page_from_freelist(gfp_mask, nodemask, order,
+                                       zonelist, high_zoneidx,
+                                       alloc_flags, preferred_zone,
+                                       migratetype);
+               if (page)
+                       goto out;
+       }
        cond_resched();

        if (unlikely(!(*did_some_progress)))
-               return NULL;
+               goto out;

 retry:
        page = get_page_from_freelist(gfp_mask, nodemask, order,
@@ -1906,7 +1924,8 @@ retry:
                drained = true;
                goto retry;
        }
-
+out:
+       VM_BUG_ON(!list_empty(&freed_pages));
        return page;
 }

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 42a4859..d71da90 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -111,6 +111,9 @@ struct scan_control {
         * are scanned.
         */
        nodemask_t      *nodemask;
+
+       /* keep freed pages */
+       struct list_head *freed_pages;
 };

 #define lru_to_page(_head) (list_entry((_head)->prev, struct page, lru))
@@ -673,7 +676,7 @@ static enum page_references page_check_references(struct page *page,
        return PAGEREF_RECLAIM;
 }

-static noinline_for_stack void free_page_list(struct list_head *free_pages)
+noinline_for_stack void free_page_list(struct list_head *free_pages)
 {
        struct pagevec freed_pvec;
        struct page *page, *tmp;
@@ -704,6 +707,10 @@ static unsigned long shrink_page_list(struct list_head *page_list,
        unsigned long nr_dirty = 0;
        unsigned long nr_congested = 0;
        unsigned long nr_reclaimed = 0;
+       struct list_head *free_list = &free_pages;
+
+       if (sc->freed_pages)
+               free_list = sc->freed_pages;

        cond_resched();

@@ -896,7 +903,7 @@ free_it:
                 * Is there need to periodically free_page_list? It would
                 * appear not as the counts should be low
                 */
-               list_add(&page->lru, &free_pages);
+               list_add(&page->lru, free_list);
                continue;

 cull_mlocked:
@@ -932,7 +939,13 @@ keep_lumpy:
        if (nr_dirty == nr_congested && nr_dirty != 0)
                zone_set_flag(zone, ZONE_CONGESTED);

-       free_page_list(&free_pages);
+       /*
+        * If this is direct reclaim for a high-order allocation, the
+        * caller frees the reclaimed pages itself so that other
+        * processes cannot steal them in the meantime.
+        */
+       if (!sc->freed_pages)
+               free_page_list(&free_pages);

        list_splice(&ret_pages, page_list);
        count_vm_events(PGACTIVATE, pgactivate);
@@ -2094,7 +2107,8 @@ out:
 }

 unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
-                               gfp_t gfp_mask, nodemask_t *nodemask)
+                               gfp_t gfp_mask, nodemask_t *nodemask,
+                               struct list_head *freed_pages)
 {
        unsigned long nr_reclaimed;
        struct scan_control sc = {
@@ -2107,6 +2121,7 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
                .order = order,
                .mem_cgroup = NULL,
                .nodemask = nodemask,
+               .freed_pages = freed_pages,
        };

        trace_mm_vmscan_direct_reclaim_begin(order,
-- 
1.7.0.4
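
(For comparison, the "or just pass a page*" idea Andrew mentions above
would hand back one page through an out-parameter rather than a whole
list. A minimal user-space sketch of that variant follows; it is purely
illustrative, the reclaim_one/block names are invented and say nothing
about the real try_to_free_pages() interface.)

#include <stdio.h>
#include <stdlib.h>

struct block {
	int order;
};

/*
 * Hypothetical variant: report progress as before, but return the single
 * block that was assembled through an out-parameter instead of putting
 * it on any shared structure or list.
 */
static unsigned long reclaim_one(int order, struct block **kept)
{
	struct block *b = malloc(sizeof(*b));

	b->order = order;
	*kept = b;		/* handed straight back to the caller */
	return 1;		/* "did some progress" */
}

int main(void)
{
	struct block *page = NULL;

	if (reclaim_one(1, &page))
		printf("kept an order-%d block at %p\n", page->order, (void *)page);
	free(page);
	return 0;
}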


-- 
Kind regards,
Minchan Kim