* KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> [2010-06-01 18:24:06]:

> From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
>
> mem_cgroup_try_charge() has a big loop (doesn't fit on a screen) and seems
> to be hard to read. Most of the routines are for slow paths. This patch
> moves code out of the loop and makes it clear what's done.
>
> Summary:
>  - cut out a function to detect whether a memcg is under account move or not.
>  - cut out a function to wait for the end of moving task acct.
>  - cut out the main loop's slow path as a function and make it clear

I prefer the word "refactor" as compared to "cut out", just a minor
nitpick on the terminology.

>    why we retry or quit by return code.
>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
> ---
>  mm/memcontrol.c |  244 +++++++++++++++++++++++++++++++++-----------------------
>  1 file changed, 145 insertions(+), 99 deletions(-)
>
> Index: mmotm-2.6.34-May21/mm/memcontrol.c
> ===================================================================
> --- mmotm-2.6.34-May21.orig/mm/memcontrol.c
> +++ mmotm-2.6.34-May21/mm/memcontrol.c
> @@ -1072,6 +1072,49 @@ static unsigned int get_swappiness(struc
>  	return swappiness;
>  }
>
> +/* A routine for testing mem is not under move_account */
> +
> +static bool mem_cgroup_under_move(struct mem_cgroup *mem)
> +{
> +	struct mem_cgroup *from = mc.from;
> +	struct mem_cgroup *to = mc.to;
> +	bool ret = false;
> +
> +	if (from == mem || to == mem)
> +		return true;
> +
> +	if (!from || !to || !mem->use_hierarchy)
> +		return false;
> +
> +	rcu_read_lock();
> +	if (css_tryget(&from->css)) {
> +		ret = css_is_ancestor(&from->css, &mem->css);
> +		css_put(&from->css);
> +	}
> +	if (!ret && css_tryget(&to->css)) {
> +		ret = css_is_ancestor(&to->css, &mem->css);
> +		css_put(&to->css);
> +	}
> +	rcu_read_unlock();
> +	return ret;
> +}
> +
> +static bool mem_cgroup_wait_acct_move(struct mem_cgroup *mem)
> +{
> +	if (mc.moving_task && current != mc.moving_task) {
> +		if (mem_cgroup_under_move(mem)) {
> +			DEFINE_WAIT(wait);
> +			prepare_to_wait(&mc.waitq, &wait, TASK_INTERRUPTIBLE);
> +			/* moving charge context might have finished. */
> +			if (mc.moving_task)
> +				schedule();

If we sleep with TASK_INTERRUPTIBLE, we should also check for
signal_pending() at the end of the schedule and handle it appropriately
to cancel the operation (see the sketch appended below). Looks good to
me otherwise.

-- 
	Three Cheers,
	Balbir
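A minimal sketch of the signal handling suggested above, assuming
mem_cgroup_wait_acct_move() is reworked to return an int status; the
return convention, the -EINTR value and the finish_wait() placement are
assumptions made for illustration, not necessarily what was merged.

/*
 * Hypothetical variant of mem_cgroup_wait_acct_move() showing where a
 * signal_pending() check could go.  Return convention assumed for this
 * sketch: 0 = no wait needed, 1 = waited and the caller should retry,
 * -EINTR = woken by a signal, the caller should cancel the charge.
 */
static int mem_cgroup_wait_acct_move(struct mem_cgroup *mem)
{
	DEFINE_WAIT(wait);

	/* Only wait if someone else is moving charges involving this memcg. */
	if (!mc.moving_task || current == mc.moving_task)
		return 0;
	if (!mem_cgroup_under_move(mem))
		return 0;

	prepare_to_wait(&mc.waitq, &wait, TASK_INTERRUPTIBLE);
	/* moving charge context might have finished. */
	if (mc.moving_task)
		schedule();
	finish_wait(&mc.waitq, &wait);

	/*
	 * We slept in TASK_INTERRUPTIBLE, so a signal may have woken us
	 * before the move finished; report that so the caller can cancel
	 * the charge instead of looping again.
	 */
	if (signal_pending(current))
		return -EINTR;

	return 1;
}

The caller in the mem_cgroup_try_charge() slow path would then presumably
map -EINTR to a failed charge rather than another trip around the retry
loop.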