On 06/20/2014 11:49 PM, Vlastimil Babka wrote:
> Compaction scanners try to lock zone locks as late as possible by checking
> many page or pageblock properties opportunistically without lock and skipping
> them if not suitable. For pages that pass the initial checks, some properties
> have to be checked again safely under lock. However, if the lock was already
> held from a previous iteration in the initial checks, the rechecks are
> unnecessary.
>
> This patch therefore skips the rechecks when the lock was already held. This is
> now possible to do, since we don't (potentially) drop and reacquire the lock
> between the initial checks and the safe rechecks anymore.
>
> Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
> Acked-by: Minchan Kim <minchan@xxxxxxxxxx>
> Cc: Mel Gorman <mgorman@xxxxxxx>
> Cc: Michal Nazarewicz <mina86@xxxxxxxxxx>
> Cc: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
> Cc: Christoph Lameter <cl@xxxxxxxxx>
> Cc: Rik van Riel <riel@xxxxxxxxxx>
> Acked-by: David Rientjes <rientjes@xxxxxxxxxx>

Reviewed-by: Zhang Yanfei <zhangyanfei@xxxxxxxxxxxxxx>

> ---
>  mm/compaction.c | 53 +++++++++++++++++++++++++++++++----------------------
>  1 file changed, 31 insertions(+), 22 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 40da812..9f6e857 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -324,22 +324,30 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
>  			goto isolate_fail;
>
>  		/*
> -		 * The zone lock must be held to isolate freepages.
> -		 * Unfortunately this is a very coarse lock and can be
> -		 * heavily contended if there are parallel allocations
> -		 * or parallel compactions. For async compaction do not
> -		 * spin on the lock and we acquire the lock as late as
> -		 * possible.
> +		 * If we already hold the lock, we can skip some rechecking.
> +		 * Note that if we hold the lock now, checked_pageblock was
> +		 * already set in some previous iteration (or strict is true),
> +		 * so it is correct to skip the suitable migration target
> +		 * recheck as well.
>  		 */
> -		if (!locked)
> +		if (!locked) {
> +			/*
> +			 * The zone lock must be held to isolate freepages.
> +			 * Unfortunately this is a very coarse lock and can be
> +			 * heavily contended if there are parallel allocations
> +			 * or parallel compactions. For async compaction do not
> +			 * spin on the lock and we acquire the lock as late as
> +			 * possible.
> +			 */
>  			locked = compact_trylock_irqsave(&cc->zone->lock,
>  								&flags, cc);
> -		if (!locked)
> -			break;
> +			if (!locked)
> +				break;
>
> -		/* Recheck this is a buddy page under lock */
> -		if (!PageBuddy(page))
> -			goto isolate_fail;
> +			/* Recheck this is a buddy page under lock */
> +			if (!PageBuddy(page))
> +				goto isolate_fail;
> +		}
>
>  		/* Found a free page, break it into order-0 pages */
>  		isolated = split_free_page(page);
> @@ -623,19 +631,20 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
>  		    page_count(page) > page_mapcount(page))
>  			continue;
>
> -		/* If the lock is not held, try to take it */
> -		if (!locked)
> +		/* If we already hold the lock, we can skip some rechecking */
> +		if (!locked) {
>  			locked = compact_trylock_irqsave(&zone->lru_lock,
>  								&flags, cc);
> -		if (!locked)
> -			break;
> +			if (!locked)
> +				break;
>
> -		/* Recheck PageLRU and PageTransHuge under lock */
> -		if (!PageLRU(page))
> -			continue;
> -		if (PageTransHuge(page)) {
> -			low_pfn += (1 << compound_order(page)) - 1;
> -			continue;
> +			/* Recheck PageLRU and PageTransHuge under lock */
> +			if (!PageLRU(page))
> +				continue;
> +			if (PageTransHuge(page)) {
> +				low_pfn += (1 << compound_order(page)) - 1;
> +				continue;
> +			}
>  		}
>
>  		lruvec = mem_cgroup_page_lruvec(page, zone);
> --

Thanks.

-- 
Zhang Yanfei
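P.S. For readers following along, the locking pattern both hunks introduce
can be reduced to the standalone userspace sketch below. It is illustrative
only: a pthread mutex stands in for the zone/LRU spinlock plus irqsave, the
page struct and all helper names (page_looks_free, page_is_free, isolate,
scan_block) are invented for the example, and the checked_pageblock/strict
subtlety from the first hunk is omitted.

#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Stand-in for the coarse zone lock (spinlock + irqsave in the kernel). */
static pthread_mutex_t zone_lock = PTHREAD_MUTEX_INITIALIZER;

struct page { bool free; };

/* Racy check: in the real code this may be stale the moment it returns. */
static bool page_looks_free(struct page *p) { return p->free; }
/* Authoritative check: only meaningful while zone_lock is held. */
static bool page_is_free(struct page *p)    { return p->free; }
static void isolate(struct page *p)         { p->free = false; }

static void scan_block(struct page *pages, size_t nr)
{
	bool locked = false;
	size_t i;

	for (i = 0; i < nr; i++) {
		/* Opportunistic check without the lock. */
		if (!page_looks_free(&pages[i]))
			continue;

		if (!locked) {
			/* Acquire the coarse lock as late as possible. */
			if (pthread_mutex_trylock(&zone_lock) != 0)
				break;	/* contended: give up, like async compaction */
			locked = true;

			/* The unlocked check was racy: recheck under the lock. */
			if (!page_is_free(&pages[i]))
				continue;
		}
		/*
		 * If the lock was already held here, it has not been dropped
		 * since a previous iteration, so the opportunistic check above
		 * effectively ran under the lock and no recheck is needed --
		 * this is exactly the redundancy the patch removes.
		 */
		isolate(&pages[i]);
	}

	if (locked)
		pthread_mutex_unlock(&zone_lock);
}

int main(void)
{
	struct page pages[4] = { {true}, {false}, {true}, {true} };

	scan_block(pages, 4);
	printf("page 0 isolated: %s\n", pages[0].free ? "no" : "yes");
	return 0;
}

The key design point mirrored from the patch: because the lock is never
dropped between the recheck and later iterations, moving the rechecks
inside the "if (!locked)" block is safe, and every iteration that finds
the lock already held skips them entirely.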