On Fri, Jul 23, 2010 at 1:03 PM, KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx> wrote:
>> * KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx> [2010-07-16 19:13:31]:
>>
>> > Currently, mem_cgroup_shrink_node_zone() initializes sc.nr_to_reclaim to 0.
>> > This means shrink_zone() only scans 32 pages and returns immediately even if
>> > it doesn't reclaim any pages.
>> >
>> > This patch fixes it.
>> >
>> > Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx>
>> > ---
>> >  mm/vmscan.c |    1 +
>> >  1 files changed, 1 insertions(+), 0 deletions(-)
>> >
>> > diff --git a/mm/vmscan.c b/mm/vmscan.c
>> > index 1691ad0..bd1d035 100644
>> > --- a/mm/vmscan.c
>> > +++ b/mm/vmscan.c
>> > @@ -1932,6 +1932,7 @@ unsigned long mem_cgroup_shrink_node_zone(struct mem_cgroup *mem,
>> >                                                 struct zone *zone, int nid)
>> >  {
>> >     struct scan_control sc = {
>> > +           .nr_to_reclaim = SWAP_CLUSTER_MAX,
>> >             .may_writepage = !laptop_mode,
>> >             .may_unmap = 1,
>> >             .may_swap = !noswap,
>>
>> Could you please do some additional testing on
>>
>> 1. How far does this push pages (in terms of when the limit is hit)?
>
> 32 pages per mem_cgroup_shrink_node_zone() call.
>
> That said, the algorithm is:
>
>  1. call mem_cgroup_largest_soft_limit_node()
>     to find the cgroup exceeding its soft limit the most
>  2. call mem_cgroup_shrink_node_zone() and shrink 32 pages
>  3. goto 1 if the limit is still exceeded
>
> If that is not your intention, can you please describe your intended algorithm?

We set it to 0, since we care only about a single page reclaim on hitting
the limit. IIRC, in the past we saw an excessive pushback when reclaiming
SWAP_CLUSTER_MAX pages; I just wanted to check whether you are seeing the
same behaviour even now after your changes.

Balbir
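
For reference, here is a minimal C sketch of the three-step loop KOSAKI outlines above. The names mem_cgroup_largest_soft_limit_node() and mem_cgroup_shrink_node_zone() come from the thread, but their signatures are simplified here, and soft_limit_still_exceeded() is a hypothetical stand-in for the real soft-limit check; the actual loop in mem_cgroup_soft_limit_reclaim() is more involved than this.

	/*
	 * Illustrative sketch only, not the actual mm/memcontrol.c code.
	 * Helper signatures are simplified; soft_limit_still_exceeded()
	 * is a hypothetical placeholder for the real res_counter check.
	 */
	static unsigned long soft_limit_reclaim_sketch(struct zone *zone, int nid,
						       gfp_t gfp_mask, bool noswap)
	{
		unsigned long total_reclaimed = 0;
		struct mem_cgroup *mem;

		for (;;) {
			/* Step 1: pick the cgroup exceeding its soft limit the most. */
			mem = mem_cgroup_largest_soft_limit_node();	/* simplified */
			if (!mem)
				break;

			/*
			 * Step 2: with .nr_to_reclaim = SWAP_CLUSTER_MAX, each call
			 * tries to reclaim up to 32 pages from this cgroup's LRU
			 * lists on the given zone.
			 */
			total_reclaimed += mem_cgroup_shrink_node_zone(mem, gfp_mask,
								       noswap, zone, nid);

			/* Step 3: go back to step 1 only while still over the limit. */
			if (!soft_limit_still_exceeded(mem))	/* hypothetical */
				break;
		}

		return total_reclaimed;
	}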