On Mon, May 19, 2014 at 12:08:30PM +0800, Jianyu Zhan wrote:
> Currently, we use (zone->managed_pages + KSWAPD_ZONE_BALANCE_GAP_RATIO-1) /
> KSWAPD_ZONE_BALANCE_GAP_RATIO to avoid a zero gap value. It's better to
> use the DIV_ROUND_UP macro for neater code and clearer meaning.
>
> Besides, the gap value is calculated against the per-zone "managed pages",
> not "present pages". This patch also corrects the comment and does some
> rephrasing.
>
> Signed-off-by: Jianyu Zhan <nasa4836@xxxxxxxxx>
> ---

Acked-by: Rafael Aquini <aquini@xxxxxxxxxx>

>  include/linux/swap.h | 8 ++++----
>  mm/vmscan.c          | 10 ++++------
>  2 files changed, 8 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 5a14b92..58e1696 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -166,10 +166,10 @@ enum {
>  #define COMPACT_CLUSTER_MAX SWAP_CLUSTER_MAX
>
>  /*
> - * Ratio between the present memory in the zone and the "gap" that
> - * we're allowing kswapd to shrink in addition to the per-zone high
> - * wmark, even for zones that already have the high wmark satisfied,
> - * in order to provide better per-zone lru behavior. We are ok to
> + * Ratio between zone->managed_pages and the "gap" above the per-zone
> + * "high_wmark". While balancing a node, we allow kswapd to shrink zones
> + * that do not meet (high_wmark + gap), even if they already meet the
> + * high_wmark, in order to provide better per-zone lru behavior. We are ok to
>  * spend not more than 1% of the memory for this zone balancing "gap".
>  */
>  #define KSWAPD_ZONE_BALANCE_GAP_RATIO 100
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 32c661d..9ef9f6c 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2268,9 +2268,8 @@ static inline bool compaction_ready(struct zone *zone, struct scan_control *sc)
>  	 * there is a buffer of free pages available to give compaction
>  	 * a reasonable chance of completing and allocating the page
>  	 */
> -	balance_gap = min(low_wmark_pages(zone),
> -		(zone->managed_pages + KSWAPD_ZONE_BALANCE_GAP_RATIO-1) /
> -			KSWAPD_ZONE_BALANCE_GAP_RATIO);
> +	balance_gap = min(low_wmark_pages(zone), DIV_ROUND_UP(
> +			zone->managed_pages, KSWAPD_ZONE_BALANCE_GAP_RATIO));
>  	watermark = high_wmark_pages(zone) + balance_gap + (2UL << sc->order);
>  	watermark_ok = zone_watermark_ok_safe(zone, 0, watermark, 0, 0);
>
> @@ -2891,9 +2890,8 @@ static bool kswapd_shrink_zone(struct zone *zone,
>  	 * high wmark plus a "gap" where the gap is either the low
>  	 * watermark or 1% of the zone, whichever is smaller.
>  	 */
> -	balance_gap = min(low_wmark_pages(zone),
> -		(zone->managed_pages + KSWAPD_ZONE_BALANCE_GAP_RATIO-1) /
> -			KSWAPD_ZONE_BALANCE_GAP_RATIO);
> +	balance_gap = min(low_wmark_pages(zone), DIV_ROUND_UP(
> +			zone->managed_pages, KSWAPD_ZONE_BALANCE_GAP_RATIO));
>
>  	/*
>  	 * If there is no low memory pressure or the zone is balanced then no
> --
> 2.0.0-rc3
>
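For readers less familiar with the macro, below is a minimal standalone sketch (plain userspace C, not kernel code; DIV_ROUND_UP is defined locally to mirror the kernel's definition in include/linux/kernel.h, and the sample managed_pages values are hypothetical) showing that the open-coded expression the patch removes and DIV_ROUND_UP() compute the same rounded-up gap, which is non-zero whenever managed_pages is non-zero:

	/* Standalone illustration only -- not kernel code. */
	#include <stdio.h>

	/* Defined locally to mirror the kernel macro in include/linux/kernel.h. */
	#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

	#define KSWAPD_ZONE_BALANCE_GAP_RATIO	100

	int main(void)
	{
		/* Hypothetical zone->managed_pages values, including a tiny zone. */
		unsigned long samples[] = { 1, 99, 100, 101, 262144 };
		unsigned int i;

		for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
			unsigned long pages = samples[i];

			/* The expression removed by the patch... */
			unsigned long old_gap =
				(pages + KSWAPD_ZONE_BALANCE_GAP_RATIO - 1) /
				KSWAPD_ZONE_BALANCE_GAP_RATIO;

			/* ...and its DIV_ROUND_UP() replacement. */
			unsigned long new_gap =
				DIV_ROUND_UP(pages, KSWAPD_ZONE_BALANCE_GAP_RATIO);

			printf("managed_pages=%7lu  old_gap=%5lu  new_gap=%5lu\n",
			       pages, old_gap, new_gap);
		}
		return 0;
	}

Both columns print identically (e.g. 1 -> 1, 100 -> 1, 101 -> 2), which is why the change is a pure cleanup with no functional effect.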