On Fri, 20 Jan 2012 00:40:34 -0800 Greg Thelen <gthelen@xxxxxxxxxx> wrote:

> On Fri, Jan 13, 2012 at 12:45 AM, KAMEZAWA Hiroyuki
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 8b67ccf..4836e8d 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -89,7 +89,6 @@ enum mem_cgroup_stat_index {
> >  	MEM_CGROUP_STAT_FILE_MAPPED,	/* # of pages charged as file rss */
> >  	MEM_CGROUP_STAT_SWAPOUT,	/* # of pages, swapped out */
> >  	MEM_CGROUP_STAT_DATA,		/* end of data requires synchronization */
> > -	MEM_CGROUP_ON_MOVE,		/* someone is moving account between groups */
> >  	MEM_CGROUP_STAT_NSTATS,
> >  };
> >
> > @@ -279,6 +278,8 @@ struct mem_cgroup {
> >  	 * mem_cgroup ? And what type of charges should we move ?
> >  	 */
> >  	unsigned long move_charge_at_immigrate;
> > +	/* set when a page under this memcg may be moving to other memcg */
> > +	atomic_t account_moving;
> >  	/*
> >  	 * percpu counter.
> >  	 */
> > @@ -1250,20 +1251,27 @@ int mem_cgroup_swappiness(struct mem_cgroup *memcg)
> >  	return memcg->swappiness;
> >  }
> >
> > +/*
> > + * For quick check, for avoiding looking up memcg, system-wide
> > + * per-cpu check is provided.
> > + */
> > +DEFINE_PER_CPU(int, mem_cgroup_account_moving);
>
> Why is this a per-cpu counter? Can this be a single atomic_t
> instead, or does cpu hotplug require per-cpu state? In the common
> case, when there is no move in progress, the counter would be
> zero and clean in all cpu caches that need it. When moving pages,
> mem_cgroup_start_move() would atomic_inc the counter.

Ok, atomic_t will be simple.

Thanks,
-Kame