On Thu, Aug 01, 2013 at 08:00:07PM +0800, Sha Zhengju wrote:
> @@ -6303,6 +6360,49 @@ mem_cgroup_css_online(struct cgroup *cont)
> 	}
>
> 	error = memcg_init_kmem(memcg, &mem_cgroup_subsys);
> +	if (!error) {
> +		if (!mem_cgroup_in_use()) {
> +			/*
> +			 * I'm the first non-root memcg, so move the global
> +			 * stats to the root memcg. Memcg creation is
> +			 * serialized by cgroup locks (cgroup_mutex), so the
> +			 * mem_cgroup_in_use() check is safe.
> +			 *
> +			 * We use global_page_state() to get the global page
> +			 * stats, but because of the optimized SMP inc/dec
> +			 * functions used to update each zone's stats, we may
> +			 * lose some counts still stocked per-cpu
> +			 * (zone->pageset->vm_stat_diff), which introduces
> +			 * some inaccuracy. But the places where the kernel
> +			 * uses these page stats to steer its next decision,
> +			 * e.g. dirty page throttling or writeback, also use
> +			 * global_page_state(), so it is accurate enough here
> +			 * too.
> +			 */
> +			spin_lock(&root_mem_cgroup->pcp_counter_lock);
> +			root_mem_cgroup->stats_base.count[MEM_CGROUP_STAT_FILE_MAPPED] =
> +				global_page_state(NR_FILE_MAPPED);
> +			root_mem_cgroup->stats_base.count[MEM_CGROUP_STAT_FILE_DIRTY] =
> +				global_page_state(NR_FILE_DIRTY);
> +			root_mem_cgroup->stats_base.count[MEM_CGROUP_STAT_WRITEBACK] =
> +				global_page_state(NR_WRITEBACK);
> +			spin_unlock(&root_mem_cgroup->pcp_counter_lock);
> +		}

If inaccuracies in these counters are okay, why do we go through an
elaborate locking scheme that sprinkles memcg callbacks everywhere just
to be 100% reliable in the rare case somebody moves memory between
cgroups?
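For context, the inaccuracy the quoted comment refers to comes from
vmstat's per-cpu batching: each CPU accumulates deltas privately and
folds them into the global counter only past a threshold, so a reader
can lag by up to threshold * nr_cpus events. A rough standalone
userspace sketch of that scheme (mod_state/read_state and the constants
are illustrative, not the kernel API):

/* cc -std=c99 -o vmstat_sketch vmstat_sketch.c */
#include <stdio.h>

#define NR_CPUS		4
#define STAT_THRESHOLD	32	/* stand-in for the per-cpu stat threshold */

static long global_stat;		/* stand-in for zone->vm_stat */
static int  vm_stat_diff[NR_CPUS];	/* per-cpu batched deltas ("stock") */

/* cheap per-cpu update; folds into the global counter only past the threshold */
static void mod_state(int cpu, int delta)
{
	vm_stat_diff[cpu] += delta;
	if (vm_stat_diff[cpu] >= STAT_THRESHOLD ||
	    vm_stat_diff[cpu] <= -STAT_THRESHOLD) {
		global_stat += vm_stat_diff[cpu];
		vm_stat_diff[cpu] = 0;
	}
}

/* stand-in for global_page_state(): reads only the folded total */
static long read_state(void)
{
	return global_stat;
}

int main(void)
{
	/* 10 events per CPU, each below the fold threshold */
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		for (int i = 0; i < 10; i++)
			mod_state(cpu, 1);

	/* True count is 40, but the reader sees 0: every delta is
	 * still sitting in a per-cpu stock. */
	printf("true count: %d, read_state(): %ld\n",
	       NR_CPUS * 10, read_state());
	return 0;
}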