* Ying Han <yinghan@xxxxxxxxxx> [2011-04-28 15:37:05]:

> We recently added a change to global background reclaim which counts
> the return value of soft_limit reclaim. Now this patch adds similar
> logic to global direct reclaim.
>
> We should skip scanning the global LRU in shrink_zone if soft_limit
> reclaim does enough work. This is the first step, where we start
> counting the nr_scanned and nr_reclaimed from soft_limit reclaim into
> the global scan_control.
>
> Signed-off-by: Ying Han <yinghan@xxxxxxxxxx>
> ---
>  mm/vmscan.c |   16 ++++++++++++++--
>  1 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index b3a569f..84003cc 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1959,11 +1959,14 @@ restart:
>   * If a zone is deemed to be full of pinned pages then just give it a light
>   * scan then give up on it.
>   */
> -static void shrink_zones(int priority, struct zonelist *zonelist,
> +static unsigned long shrink_zones(int priority, struct zonelist *zonelist,
>  					struct scan_control *sc)
>  {
>  	struct zoneref *z;
>  	struct zone *zone;
> +	unsigned long nr_soft_reclaimed;
> +	unsigned long nr_soft_scanned;
> +	unsigned long total_scanned = 0;
>
>  	for_each_zone_zonelist_nodemask(zone, z, zonelist,
>  					gfp_zone(sc->gfp_mask), sc->nodemask) {
> @@ -1980,8 +1983,17 @@ static void shrink_zones(int priority, struct zonelist *zonelist,
>  			continue;	/* Let kswapd poll it */
>  		}
>
> +		nr_soft_scanned = 0;
> +		nr_soft_reclaimed = mem_cgroup_soft_limit_reclaim(zone,
> +						sc->order, sc->gfp_mask,
> +						&nr_soft_scanned);
> +		sc->nr_reclaimed += nr_soft_reclaimed;
> +		total_scanned += nr_soft_scanned;
> +
>  		shrink_zone(priority, zone, sc);
>  	}
> +
> +	return total_scanned;
>  }
>
>  static bool zone_reclaimable(struct zone *zone)
> @@ -2045,7 +2057,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
>  		sc->nr_scanned = 0;
>  		if (!priority)
>  			disable_swap_token();
> -		shrink_zones(priority, zonelist, sc);
> +		total_scanned += shrink_zones(priority, zonelist, sc);
>  		/*
>  		 * Don't shrink slabs when reclaiming memory from
>  		 * over limit cgroups

Seems reasonable to me. Are you able to see the benefits of setting
soft limits and then adding the stats back into the global LRU scan
accounting when soft limits do a good job?

--
	Three Cheers,
	Balbir
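
To make the "skip the global LRU if soft_limit reclaim does enough work"
follow-up concrete, below is a rough userspace model of the accounting this
patch sets up and one possible shape of the eventual skip check. The
scan_control here is mocked down to three fields, the zonelist walk is
collapsed to a single pass, and the nr_to_reclaim comparison is only a
placeholder for whatever threshold a real follow-up patch would use; none of
this is the mm/vmscan.c implementation.

/*
 * Userspace sketch only: models the soft limit accounting added by this
 * patch plus a hypothetical "skip the global LRU" decision.  All names and
 * numbers below are placeholders for illustration.
 */
#include <stdio.h>

struct scan_control {			/* mocked: only the fields used here */
	unsigned long nr_to_reclaim;
	unsigned long nr_scanned;
	unsigned long nr_reclaimed;
};

/* Stand-in for mem_cgroup_soft_limit_reclaim(): pretend the over-soft-limit
 * groups give back a fixed amount and report how much they scanned. */
static unsigned long soft_limit_reclaim(unsigned long *nr_scanned)
{
	*nr_scanned = 64;
	return 32;
}

/* Stand-in for shrink_zone(): the global LRU scan we would like to skip. */
static void global_lru_shrink(struct scan_control *sc)
{
	sc->nr_scanned += 128;
	sc->nr_reclaimed += 16;
}

/* Model of shrink_zones() after this patch: fold the soft limit results into
 * sc and return how much the soft limit pass scanned. */
static unsigned long shrink_zones_model(struct scan_control *sc)
{
	unsigned long nr_soft_scanned = 0;
	unsigned long nr_soft_reclaimed;
	unsigned long total_scanned = 0;

	nr_soft_reclaimed = soft_limit_reclaim(&nr_soft_scanned);
	sc->nr_reclaimed += nr_soft_reclaimed;
	total_scanned += nr_soft_scanned;

	/*
	 * Hypothetical follow-up step (NOT part of this patch): only fall
	 * back to the global LRU when the soft limit pass did not reclaim
	 * enough on its own.
	 */
	if (sc->nr_reclaimed < sc->nr_to_reclaim)
		global_lru_shrink(sc);

	return total_scanned;
}

int main(void)
{
	struct scan_control sc = { .nr_to_reclaim = 32 };
	unsigned long total_scanned;

	total_scanned = shrink_zones_model(&sc);
	printf("soft limit pass scanned: %lu\n", total_scanned);
	printf("reclaimed: %lu, globally scanned: %lu\n",
	       sc.nr_reclaimed, sc.nr_scanned);
	return 0;
}

Built with a plain C compiler this just prints the counters; in the kernel
the decision would presumably live in shrink_zones()/do_try_to_free_pages()
and would have to be made per zone rather than once per zonelist walk.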