On Wed, Apr 27, 2011 at 5:39 PM, KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> On Wed, 27 Apr 2011 01:25:25 +0900
> Minchan Kim <minchan.kim@xxxxxxxxx> wrote:
>
>> Compaction is a good solution for getting contiguous pages, but it causes
>> LRU churning, which is not good.
>> This patch makes the compaction code use in-order putback, so that after
>> compaction completes, the migrated pages keep their LRU ordering.
>>
>> Cc: KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx>
>> Cc: Mel Gorman <mgorman@xxxxxxx>
>> Cc: Rik van Riel <riel@xxxxxxxxxx>
>> Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
>> Signed-off-by: Minchan Kim <minchan.kim@xxxxxxxxx>
>> ---
>>  mm/compaction.c |   22 +++++++++++++++-------
>>  1 files changed, 15 insertions(+), 7 deletions(-)
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index a2f6e96..480d2ac 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -211,11 +211,11 @@ static void isolate_freepages(struct zone *zone,
>>  /* Update the number of anon and file isolated pages in the zone */
>>  static void acct_isolated(struct zone *zone, struct compact_control *cc)
>>  {
>> -     struct page *page;
>> +     struct pages_lru *pages_lru;
>>       unsigned int count[NR_LRU_LISTS] = { 0, };
>>
>> -     list_for_each_entry(page, &cc->migratepages, lru) {
>> -             int lru = page_lru_base_type(page);
>> +     list_for_each_entry(pages_lru, &cc->migratepages, lru) {
>> +             int lru = page_lru_base_type(pages_lru->page);
>>               count[lru]++;
>>       }
>>
>> @@ -281,6 +281,7 @@ static unsigned long isolate_migratepages(struct zone *zone,
>>       spin_lock_irq(&zone->lru_lock);
>>       for (; low_pfn < end_pfn; low_pfn++) {
>>               struct page *page;
>> +             struct pages_lru *pages_lru;
>>               bool locked = true;
>>
>>               /* give a chance to irqs before checking need_resched() */
>> @@ -334,10 +335,16 @@ static unsigned long isolate_migratepages(struct zone *zone,
>>                       continue;
>>               }
>>
>> +             pages_lru = kmalloc(sizeof(struct pages_lru), GFP_ATOMIC);
>> +             if (!pages_lru)
>> +                     continue;
>
> Hmm, can't we use a fixed number of statically allocated pages_lru, per-node
> or per-zone? I think using kmalloc() in the memory reclaim path is risky.

Yes, we can enhance it with a pagevec-like approach. It's on my TODO list. :)

From compaction's point of view, it is used for reclaiming high-order pages,
so most of the time order-0 pages are readily available. That's a basic
assumption of compaction, so it shouldn't be a problem.

Thanks for the review, Kame.

--
Kind regards,
Minchan Kim
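
For the curious, a pagevec-like pool along the lines Kamezawa suggests could
look roughly like the sketch below. This is only an illustration, not the
actual follow-up patch: it assumes the pages_lru struct from the patch above,
and the names cc_lru_pool, pages_lru_get(), and pages_lru_put_all() are
invented for this sketch. It leans on COMPACT_CLUSTER_MAX, the existing cap
on pages isolated per compaction round, to size the pool.

	/*
	 * Illustrative sketch only: embed a fixed-size array of
	 * pages_lru entries in the compaction control state, so
	 * isolate_migratepages() never needs kmalloc(GFP_ATOMIC)
	 * while reclaiming memory.
	 */
	struct cc_lru_pool {
		unsigned int nr;				/* slots handed out so far */
		struct pages_lru slots[COMPACT_CLUSTER_MAX];
	};

	/* take a free slot, or NULL when this round's quota is used up */
	static struct pages_lru *pages_lru_get(struct cc_lru_pool *pool)
	{
		if (pool->nr >= COMPACT_CLUSTER_MAX)
			return NULL;
		return &pool->slots[pool->nr++];
	}

	/* after putback, every slot is free again for the next round */
	static void pages_lru_put_all(struct cc_lru_pool *pool)
	{
		pool->nr = 0;
	}

With such a pool, the kmalloc()/continue pair in isolate_migratepages() would
become a pages_lru_get() call, with a NULL return ending the isolation loop
instead of failing an atomic allocation under memory pressure, which bounds
work per round the same way COMPACT_CLUSTER_MAX already does.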