On Tue, Aug 03, 2010 at 12:47:36PM +0800, Minchan Kim wrote:
> On Tue, Aug 3, 2010 at 1:28 PM, Wu Fengguang <fengguang.wu@xxxxxxxxx> wrote:
> > On Tue, Aug 03, 2010 at 12:09:18PM +0800, Minchan Kim wrote:
> >> On Tue, Aug 3, 2010 at 12:31 PM, Chris Webb <chris@xxxxxxxxxxxx> wrote:
> >> > Minchan Kim <minchan.kim@xxxxxxxxx> writes:
> >> >
> >> >> Another possibility is _zone_reclaim_ in NUMA.
> >> >> Your working set has many anonymous pages.
> >> >>
> >> >> The zone_reclaim path sets the priority to ZONE_RECLAIM_PRIORITY.
> >> >> That can switch reclaim into lumpy mode, which can page out anon pages.
> >> >>
> >> >> Could you show me /proc/sys/vm/[zone_reclaim_mode/min_unmapped_ratio]?
> >> >
> >> > Sure, no problem. On the machine with the /proc/meminfo I showed earlier,
> >> > these are
> >> >
> >> > # cat /proc/sys/vm/zone_reclaim_mode
> >> > 0
> >> > # cat /proc/sys/vm/min_unmapped_ratio
> >> > 1
> >>
> >> If zone_reclaim_mode is zero, it doesn't swap out anon pages.
> >
> > If there are lots of order-1 or higher allocations, anonymous pages
> > will be randomly evicted, regardless of their LRU ages. This is
>
> I thought the amount swapped out is huge (i.e. 3G) even if it enters lumpy mode.
> But it's possible. :)
>
> > probably another factor behind the users' complaints. Are there easy
> > ways to confirm this other than patching the kernel?
>
> Can cat /proc/buddyinfo help? Some high-order slab caches may show up there. :)
>
> Off-topic:
> It would be better to add a new vmstat counter for lumpy reclaim entry.

I think it's a good debug entry. Although convenient, lumpy reclaim is
accompanied by some bad side effects. When something goes wrong, it
helps to check the number of lumpy reclaims.

Thanks,
Fengguang

> Pseudo code.
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 0f9f624..d10ff4e 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1641,7 +1641,7 @@ out:
>  	}
>  }
>
> -static void set_lumpy_reclaim_mode(int priority, struct scan_control *sc)
> +static void set_lumpy_reclaim_mode(int priority, struct scan_control *sc, struct zone *zone)
>  {
>  	/*
>  	 * If we need a large contiguous chunk of memory, or have
> @@ -1654,6 +1654,9 @@ static void set_lumpy_reclaim_mode(int priority, struct scan_control *sc)
>  		sc->lumpy_reclaim_mode = 1;
>  	else
>  		sc->lumpy_reclaim_mode = 0;
> +
> +	if (sc->lumpy_reclaim_mode)
> +		inc_zone_state(zone, NR_LUMPY);
>  }
>
>  /*
> @@ -1670,7 +1673,7 @@ static void shrink_zone(int priority, struct zone *zone,
>
>  	get_scan_count(zone, sc, nr, priority);
>
> -	set_lumpy_reclaim_mode(priority, sc);
> +	set_lumpy_reclaim_mode(priority, sc, zone);
>
>  	while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
>  				nr[LRU_INACTIVE_FILE]) {
>
> --
> Kind regards,
> Minchan Kim
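P.S. For anyone following along: the /proc/buddyinfo check Minchan suggests
lists, for each zone, the number of free blocks at each page order. A sketch
of how to read one line of that output (the sample line and the helper names
`parse_buddyinfo_line`/`free_pages` are made up for illustration, not taken
from Chris's machine):

```python
def parse_buddyinfo_line(line):
    """Parse one /proc/buddyinfo line into (node, zone, counts).

    Line format: "Node 0, zone   Normal   4096   128 ..." where the
    trailing numbers are free block counts for order 0, 1, 2, ...
    """
    parts = line.split()
    node = int(parts[1].rstrip(','))      # "0," -> 0
    zone = parts[3]
    counts = [int(x) for x in parts[4:]]  # one count per order
    return node, zone, counts

def free_pages(counts):
    """Total free pages: an order-n block spans 2**n pages."""
    return sum(c << order for order, c in enumerate(counts))

# Illustrative sample, not real output from the reported machine:
sample = "Node 0, zone   Normal   4096   128   12   3   1   0   0   0   0   0   0"
node, zone, counts = parse_buddyinfo_line(sample)
# counts[1] is the number of free order-1 (two-page) blocks; near-zero
# counts at order 1+ while order 0 stays large would point at the kind
# of fragmentation that drives lumpy reclaim.
```
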