On Tue, Jun 24, 2014 at 04:38:41PM +0900, Joonsoo Kim wrote:
> On Fri, Jun 13, 2014 at 12:38:22AM +0400, Vladimir Davydov wrote:
> > @@ -3462,6 +3474,17 @@ static inline void __cache_free(struct kmem_cache *cachep, void *objp,
> >
> >  	kmemcheck_slab_free(cachep, objp, cachep->object_size);
> >
> > +#ifdef CONFIG_MEMCG_KMEM
> > +	if (unlikely(!ac)) {
> > +		int nodeid = page_to_nid(virt_to_page(objp));
> > +
> > +		spin_lock(&cachep->node[nodeid]->list_lock);
> > +		free_block(cachep, &objp, 1, nodeid);
> > +		spin_unlock(&cachep->node[nodeid]->list_lock);
> > +		return;
> > +	}
> > +#endif
> > +
>
> And, please document the intention of this code. :)

Sure.

> And, you said that this way of implementation would be slow, because
> there could be many objects in dead caches and this implementation
> needs the node spin_lock on each object freeing. Is it no problem now?

It may be :(

> If you have any performance data about this implementation and the
> alternative one, could you share it?

I haven't (shame on me!). I'll do some testing today and send you the
results.

Thanks.
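
P.S. Re documenting the intention: something along the lines of the
comment below is what I have in mind. The wording is only a sketch
inferred from this thread, not final patch text:

#ifdef CONFIG_MEMCG_KMEM
	/*
	 * A dead memcg cache has its per-cpu array caches disabled
	 * (ac == NULL), so no objects may be buffered on this CPU and
	 * the cache can eventually be drained. Put the object straight
	 * back onto the slab lists of its node, under the node's
	 * list_lock.
	 */
	if (unlikely(!ac)) {
		int nodeid = page_to_nid(virt_to_page(objp));

		spin_lock(&cachep->node[nodeid]->list_lock);
		free_block(cachep, &objp, 1, nodeid);
		spin_unlock(&cachep->node[nodeid]->list_lock);
		return;
	}
#endif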
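P.P.S. On the lock contention concern: if per-object locking does turn
out to hurt, it could in principle be amortized by freeing objects in
per-node batches, since free_block() already takes an array of object
pointers. A rough sketch against the 4-argument free_block() used in
the hunk above -- free_batch_to_node() is a made-up name for
illustration, not anything in the patch:

/*
 * Hypothetical helper: free nr objects that all belong to the same
 * node with a single list_lock acquisition, instead of locking once
 * per object as in the hunk above.
 */
static void free_batch_to_node(struct kmem_cache *cachep,
			       void **objpp, int nr, int nodeid)
{
	/* Take the node lock once for the whole batch ... */
	spin_lock(&cachep->node[nodeid]->list_lock);
	/* ... and hand all nr objects to the slab lists in one go. */
	free_block(cachep, objpp, nr, nodeid);
	spin_unlock(&cachep->node[nodeid]->list_lock);
}

Whether batching is worth the extra bookkeeping on the free path is
exactly what the testing should show.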