* KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> [2010-06-01 18:27:20]:

> From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
>
> Now, to check whether a memcg is under task-account-moving, we do
> css_tryget() against mc.to and mc.from. But this just complicates
> things. This patch makes the check easier. (And I suspect the current
> code has some races.)
>
> This patch adds a spinlock to move_charge_struct and guards
> modification of mc.to and mc.from with it. With this, we don't have
> to think about complicated races around this non-critical path.
>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
> ---
>  mm/memcontrol.c |   48 ++++++++++++++++++++++++++++--------------------
>  1 file changed, 28 insertions(+), 20 deletions(-)
>
> Index: mmotm-2.6.34-May21/mm/memcontrol.c
> ===================================================================
> --- mmotm-2.6.34-May21.orig/mm/memcontrol.c
> +++ mmotm-2.6.34-May21/mm/memcontrol.c
> @@ -268,6 +268,7 @@ enum move_type {
>
>  /* "mc" and its members are protected by cgroup_mutex */
>  static struct move_charge_struct {
> +	spinlock_t lock; /* for from, to, moving_task */
>  	struct mem_cgroup *from;
>  	struct mem_cgroup *to;
>  	unsigned long precharge;
> @@ -276,6 +277,7 @@ static struct move_charge_struct {
>  	struct task_struct *moving_task;	/* a task moving charges */
>  	wait_queue_head_t waitq;		/* a waitq for other context */
>  } mc = {
> +	.lock = __SPIN_LOCK_UNLOCKED(mc.lock),
>  	.waitq = __WAIT_QUEUE_HEAD_INITIALIZER(mc.waitq),
>  };
>
> @@ -1076,26 +1078,25 @@ static unsigned int get_swappiness(struc
>
>  static bool mem_cgroup_under_move(struct mem_cgroup *mem)
>  {
> -	struct mem_cgroup *from = mc.from;
> -	struct mem_cgroup *to = mc.to;
> +	struct mem_cgroup *from;
> +	struct mem_cgroup *to;
>  	bool ret = false;
> -
> -	if (from == mem || to == mem)
> -		return true;
> -
> -	if (!from || !to || !mem->use_hierarchy)
> -		return false;
> -
> -	rcu_read_lock();
> -	if (css_tryget(&from->css)) {
> -		ret = css_is_ancestor(&from->css, &mem->css);
> -		css_put(&from->css);
> -	}
> -	if (!ret && css_tryget(&to->css)) {
> -		ret = css_is_ancestor(&to->css, &mem->css);
> +	/*
> +	 * Unlike task_move routines, we access mc.to, mc.from not under
> +	 * mutual execution by cgroup_mutex. Here, we take spinlock instead.

Typo: "execution" should be "exclusion".

> +	 */
> +	spin_lock_irq(&mc.lock);

Why do we use the _irq variant here?

-- 
	Three Cheers,
	Balbir
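As an aside for readers following the thread: the core of the change is that a
plain lock now guards the (from, to) pair, so the "is this memcg involved in a
move?" test becomes a locked read instead of a css_tryget()/css_put() dance.
Below is a minimal userspace sketch of that pattern, not the memcg code
itself: struct group, begin_move(), end_move(), and under_move() are made-up
names for illustration, and pthread_mutex_t with PTHREAD_MUTEX_INITIALIZER
stands in for the kernel's spinlock_t and __SPIN_LOCK_UNLOCKED.

/*
 * Userspace analogue of the locking pattern in the patch: a small lock
 * guards the from/to pair so readers can test membership with a plain
 * locked comparison. All names here are hypothetical.
 *
 * Build: cc -o move_sketch move_sketch.c -lpthread
 */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct group { int id; };

static struct {
	pthread_mutex_t lock;	/* guards from, to */
	struct group *from;
	struct group *to;
} mc = {
	.lock = PTHREAD_MUTEX_INITIALIZER,	/* static init, like __SPIN_LOCK_UNLOCKED */
};

/* Writer side: publish the pair only while holding the lock. */
static void begin_move(struct group *from, struct group *to)
{
	pthread_mutex_lock(&mc.lock);
	mc.from = from;
	mc.to = to;
	pthread_mutex_unlock(&mc.lock);
}

/* Writer side: clear the pair under the same lock when the move ends. */
static void end_move(void)
{
	pthread_mutex_lock(&mc.lock);
	mc.from = NULL;
	mc.to = NULL;
	pthread_mutex_unlock(&mc.lock);
}

/*
 * Reader side: the pointers cannot change while the lock is held, so a
 * plain comparison suffices and no refcount juggling is needed.
 */
static bool under_move(struct group *g)
{
	bool ret;

	pthread_mutex_lock(&mc.lock);
	ret = (mc.from == g || mc.to == g);
	pthread_mutex_unlock(&mc.lock);
	return ret;
}

int main(void)
{
	struct group a = { .id = 1 }, b = { .id = 2 };

	begin_move(&a, &b);
	printf("a under move: %d\n", under_move(&a));	/* prints 1 */
	end_move();
	printf("a under move: %d\n", under_move(&a));	/* prints 0 */
	return 0;
}

On the _irq question above, for context: spin_lock_irq() additionally
disables interrupts on the local CPU, which is only required when the same
lock can also be taken from interrupt context (otherwise an interrupt handler
contending for the lock on the CPU that already holds it would deadlock). If
mc.lock is only ever taken from process context, a plain spin_lock() would
suffice, which is presumably the point of the question.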