On Wed, May 25, 2011 at 10:19 PM, KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> From: Ying Han <yinghan@xxxxxxxxxx>
>
> The number of reclaimable pages per zone is a useful piece of information
> for controlling the memory reclaim schedule. This patch exports it.
>
> Changelog v2->v3:
>  - added comments.
>
> Signed-off-by: Ying Han <yinghan@xxxxxxxxxx>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
> ---
>  include/linux/memcontrol.h |    2 ++
>  mm/memcontrol.c            |   24 ++++++++++++++++++++++++
>  2 files changed, 26 insertions(+)
>
> Index: memcg_async/mm/memcontrol.c
> ===================================================================
> --- memcg_async.orig/mm/memcontrol.c
> +++ memcg_async/mm/memcontrol.c
> @@ -1240,6 +1240,30 @@ static unsigned long mem_cgroup_nr_lru_p
>  }
>  #endif /* CONFIG_NUMA */
>
> +/**
> + * mem_cgroup_zone_reclaimable_pages
> + * @memcg: the memcg
> + * @nid: node index to be checked.
> + * @zid: zone index to be checked.
> + *
> + * This function returns the number of reclaimable pages on a zone for the
> + * given memcg. Reclaimable pages include file caches and, if swap is
> + * available, anonymous pages; they never include unevictable pages.
> + */
> +unsigned long mem_cgroup_zone_reclaimable_pages(struct mem_cgroup *memcg,
> +						int nid, int zid)
> +{
> +	unsigned long nr;
> +	struct mem_cgroup_per_zone *mz = mem_cgroup_zoneinfo(memcg, nid, zid);
> +
> +	nr = MEM_CGROUP_ZSTAT(mz, NR_ACTIVE_FILE) +
> +	     MEM_CGROUP_ZSTAT(mz, NR_ACTIVE_FILE);
> +	if (nr_swap_pages > 0)
> +		nr += MEM_CGROUP_ZSTAT(mz, NR_ACTIVE_ANON) +
> +		      MEM_CGROUP_ZSTAT(mz, NR_INACTIVE_ANON);
> +	return nr;
> +}
> +
>  struct zone_reclaim_stat *mem_cgroup_get_reclaim_stat(struct mem_cgroup *memcg,
>  						      struct zone *zone)
>  {
> Index: memcg_async/include/linux/memcontrol.h
> ===================================================================
> --- memcg_async.orig/include/linux/memcontrol.h
> +++ memcg_async/include/linux/memcontrol.h
> @@ -109,6 +109,8 @@ extern void mem_cgroup_end_migration(str
>   */
>  int mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg);
>  int mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg);
> +unsigned long
> +mem_cgroup_zone_reclaimable_pages(struct mem_cgroup *memcg, int nid, int zid);
>  int mem_cgroup_select_victim_node(struct mem_cgroup *memcg);
>  unsigned long mem_cgroup_zone_nr_lru_pages(struct mem_cgroup *memcg,
>  					   struct zone *zone,

Again, please apply the patch (it fixes a copy-and-paste bug above: NR_ACTIVE_FILE is counted twice and NR_INACTIVE_FILE not at all):

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6a52699..0b88d71 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1217,7 +1217,7 @@ unsigned long mem_cgroup_zone_reclaimable_pages(struct mem_cgroup *memcg,
 	struct mem_cgroup_per_zone *mz = mem_cgroup_zoneinfo(memcg, nid, zid);
 
 	nr = MEM_CGROUP_ZSTAT(mz, NR_ACTIVE_FILE) +
-	     MEM_CGROUP_ZSTAT(mz, NR_ACTIVE_FILE);
+	     MEM_CGROUP_ZSTAT(mz, NR_INACTIVE_FILE);
 	if (nr_swap_pages > 0)
 		nr += MEM_CGROUP_ZSTAT(mz, NR_ACTIVE_ANON) +
 		      MEM_CGROUP_ZSTAT(mz, NR_INACTIVE_ANON);

Also, you need to move this patch earlier in the series, since patch 1/10 needs it.
--Ying

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxxx. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Fight unfair telecom internet charges in Canada: sign http://stopthemeter.ca/