On Fri, Jan 09, 2015 at 09:14:00PM -0500, Johannes Weiner wrote:
> The initialization code for the per-cpu charge stock and the soft
> limit tree is compact enough to inline it into mem_cgroup_init().
>
> Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
> ---
>  mm/memcontrol.c | 57 ++++++++++++++++++++++++---------------------------------
>  1 file changed, 24 insertions(+), 33 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index aad254b30708..f66bb8f83ac9 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c

[...]

> @@ -5927,10 +5896,32 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage,
>   */
>  static int __init mem_cgroup_init(void)
>  {
> +	int cpu, nid;
> +
>  	hotcpu_notifier(memcg_cpu_hotplug_callback, 0);
> +
> +	for_each_possible_cpu(cpu)
> +		INIT_WORK(&per_cpu_ptr(&memcg_stock, cpu)->work,
> +			  drain_local_stock);
> +
> +	for_each_node(nid) {
> +		struct mem_cgroup_tree_per_node *rtpn;
> +		int zone;
> +
> +		rtpn = kzalloc_node(sizeof(*rtpn), GFP_KERNEL, nid);

I'd like to see BUG_ON(!rtpn) here, just for clarity. Not critical though.

Reviewed-by: Vladimir Davydov <vdavydov@xxxxxxxxxxxxx>

> +
> +		for (zone = 0; zone < MAX_NR_ZONES; zone++) {
> +			struct mem_cgroup_tree_per_zone *rtpz;
> +
> +			rtpz = &rtpn->rb_tree_per_zone[zone];
> +			rtpz->rb_root = RB_ROOT;
> +			spin_lock_init(&rtpz->lock);
> +		}
> +		soft_limit_tree.rb_tree_per_node[nid] = rtpn;
> +	}
> +
>  	enable_swap_cgroup();
> -	mem_cgroup_soft_limit_tree_init();
> -	memcg_stock_init();
> +
>  	return 0;
>  }
>  subsys_initcall(mem_cgroup_init);
> --
> 2.2.0