On Fri, 2012-07-20 at 12:19 +0900, Kamezawa Hiroyuki wrote:
>
> When I added batching, I didn't touch the page-reclaim path because it delays
> res_counter_uncharge() and makes more threads run into page reclaim.
> But, from the above score, batching seems required.
>
> And because of the current design of per-zone-per-memcg-LRU, batching
> works very very well....all lru pages shrink_page_list() scans are on
> the same memcg.
>
> BTW, it's better to show 'how much improved' in the patch description..

I didn't put the specific improvement in the patch description because the
performance change is specific to my machine and benchmark, and the
improvement could vary for others.  However, I did include the specific
numbers in the body of my message.  Hope that is enough.

> >
> > ---
> > Signed-off-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 33dc256..aac5672 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -779,6 +779,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
> >
> >  	cond_resched();
> >
> > +	mem_cgroup_uncharge_start();
> >  	while (!list_empty(page_list)) {
> >  		enum page_references references;
> >  		struct address_space *mapping;
> > @@ -1026,6 +1027,7 @@ keep_lumpy:
> >
> >  	list_splice(&ret_pages, page_list);
> >  	count_vm_events(PGACTIVATE, pgactivate);
> > +	mem_cgroup_uncharge_end();
>
> I guess placing mem_cgroup_uncharge_end() just after the loop may be better looking.

I initially thought of doing that.  I later pushed the statement down to
after list_splice(&ret_pages, page_list), since that is when the page
reclaim is actually completed.  It probably doesn't matter one way or the
other; I can move it to just after the loop if people think that's better.

Thanks for reviewing the change.

Tim
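
For readers following the thread, here is a minimal sketch of the
uncharge-batching pattern being discussed.  It is a simplified,
hypothetical illustration (the struct, function names, and per-task
handling are assumptions made for readability), not the actual
mem_cgroup_uncharge_start()/end() implementation in mm/memcontrol.c:

	/*
	 * Hypothetical sketch of uncharge batching.  Instead of one
	 * contended counter update per page, pages uncharged inside a
	 * start/end section are accumulated and applied in one update.
	 */
	struct uncharge_batch {
		int		active;		/* inside a start/end section? */
		unsigned long	nr_pages;	/* pages accumulated so far    */
	};

	static struct uncharge_batch batch;	/* per-task in the real kernel */

	static void counter_uncharge(unsigned long nr_pages)
	{
		/* the costly, contended update we want to amortize */
		(void)nr_pages;
	}

	static void uncharge_start(void)
	{
		batch.active = 1;
		batch.nr_pages = 0;
	}

	static void uncharge_page(void)
	{
		if (batch.active)
			batch.nr_pages++;	/* defer the counter update */
		else
			counter_uncharge(1);	/* unbatched fallback       */
	}

	static void uncharge_end(void)
	{
		if (batch.nr_pages)
			counter_uncharge(batch.nr_pages); /* one update for the whole batch */
		batch.active = 0;
	}

The point of wrapping the shrink_page_list() loop with start/end in the
patch above is exactly this: every page freed inside the loop only bumps a
local count, and the expensive res_counter update happens once per batch
instead of once per page.  That works especially well here because, with
per-zone-per-memcg LRUs, all pages in one pass belong to the same memcg.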