On Wed, Sep 12, 2012 at 01:07:32PM -0700, Andrew Morton wrote:
> On Tue, 11 Sep 2012 09:41:52 +0900
> Minchan Kim <minchan@xxxxxxxxxx> wrote:
>
> > This patch drops clean cache pages instead of migrating them during
> > alloc_contig_range() to minimise allocation latency by reducing the
> > amount of migration that is necessary. It's useful for CMA because the
> > latency of migration matters more than evicting the working set of
> > background processes. In addition, as pages are reclaimed, fewer free
> > pages are required as migration targets, so it avoids memory reclaim
> > to get free pages, which is a contributory factor to increased latency.
> >
> > * from v1
> >   * drop migrate_mode_t
> >   * add reclaim_clean_pages_from_list instead of MIGRATE_DISCARD support - Mel
> >
> > I measured the elapsed time of __alloc_contig_migrate_range(), which
> > migrates 10M in a 40M movable zone, on a QEMU machine.
> >
> > Before - 146ms, After - 7ms
> >
> > ...
> >
> > @@ -758,7 +760,9 @@ static unsigned long shrink_page_list(struct list_head *page_list,
> >  			wait_on_page_writeback(page);
> >  		}
> >
> > -		references = page_check_references(page, sc);
> > +		if (!force_reclaim)
> > +			references = page_check_references(page, sc);
>
> grumble.  Could we please document `enum page_references' and
> page_check_references()?
>
> And the `force_reclaim' arg could do with some documentation.  It only
> forces reclaim under certain circumstances.  They should be described,
> and a reason should be provided.

I will give it a shot in another patch.

> Why didn't this patch use PAGEREF_RECLAIM_CLEAN?  It is possible for
> someone to dirty one of these pages after we tested its cleanness and
> we'll then go off and write it out, but we won't be reclaiming it?

Absolutely. Thanks, Andrew! Here it goes.
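For readers outside the kernel tree, the decision being changed can be sketched as a simplified userspace model. The enum names mirror mm/vmscan.c, but the function below is purely illustrative, not the kernel implementation:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model of the reclaim decision in shrink_page_list().
 * Enum names mirror mm/vmscan.c; may_reclaim() is a hypothetical
 * helper for illustration only. */
enum page_references {
	PAGEREF_RECLAIM,	/* reclaim; write the page back if dirty */
	PAGEREF_RECLAIM_CLEAN,	/* reclaim only while the page is still clean */
	PAGEREF_KEEP,		/* leave the page on the inactive list */
	PAGEREF_ACTIVATE,	/* move the page back to the active list */
};

/* With PAGEREF_RECLAIM, a page found dirty is queued for writeback and
 * then reclaimed. With PAGEREF_RECLAIM_CLEAN, a page that became dirty
 * after reclaim_clean_pages_from_list() checked it is simply skipped,
 * so the CMA fast path never starts I/O it was trying to avoid. */
static bool may_reclaim(enum page_references refs, bool page_dirty)
{
	switch (refs) {
	case PAGEREF_RECLAIM:
		return true;
	case PAGEREF_RECLAIM_CLEAN:
		return !page_dirty;
	default:
		return false;
	}
}
```

This is why the one-line patch below switches the default from PAGEREF_RECLAIM to PAGEREF_RECLAIM_CLEAN for the forced-reclaim path: a racing dirtier no longer drags the allocation into page-out.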
====== 8< ======

>From 90022feb9ecf8e9a4efba7cbf49d7cead777020f Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan@xxxxxxxxxx>
Date: Thu, 13 Sep 2012 08:45:58 +0900
Subject: [PATCH] mm: cma: reclaim only clean pages

It is possible for pages to become dirty after the check in
reclaim_clean_pages_from_list(), in which case they end up being paged
out, which is never what we want when the goal is to speed things up.
This patch fixes it.

Cc: Marek Szyprowski <m.szyprowski@xxxxxxxxxxx>
Cc: Michal Nazarewicz <mina86@xxxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxx>
Signed-off-by: Minchan Kim <minchan@xxxxxxxxxx>
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index f8f56f8..1ee4b69 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -694,7 +694,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		struct address_space *mapping;
 		struct page *page;
 		int may_enter_fs;
-		enum page_references references = PAGEREF_RECLAIM;
+		enum page_references references = PAGEREF_RECLAIM_CLEAN;

 		cond_resched();
--
1.7.9.5

--
Kind regards,
Minchan Kim