Hi Christoph,

On Wed, Nov 05, 2014 at 12:43:31PM -0600, Christoph Lameter wrote:
> On Mon, 3 Nov 2014, Vladimir Davydov wrote:
>
> > +static __always_inline void slab_free(struct kmem_cache *cachep, void *objp);
> > +
> >  static __always_inline void *
> >  slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
> >  		unsigned long caller)
> > @@ -3185,6 +3187,10 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
> >  		kmemcheck_slab_alloc(cachep, flags, ptr, cachep->object_size);
> >  		if (unlikely(flags & __GFP_ZERO))
> >  			memset(ptr, 0, cachep->object_size);
> > +		if (unlikely(memcg_kmem_recharge_slab(ptr, flags))) {
> > +			slab_free(cachep, ptr);
> > +			ptr = NULL;
> > +		}
> >  	}
> >
> >  	return ptr;
> > @@ -3250,6 +3256,10 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
> >  		kmemcheck_slab_alloc(cachep, flags, objp, cachep->object_size);
> >  		if (unlikely(flags & __GFP_ZERO))
> >  			memset(objp, 0, cachep->object_size);
> > +		if (unlikely(memcg_kmem_recharge_slab(objp, flags))) {
> > +			slab_free(cachep, objp);
> > +			objp = NULL;
> > +		}
> >  	}
>
> Please do not add code to the hotpaths if its avoidable. Can you charge
> the full slab only when allocated please?

I call memcg_kmem_recharge_slab only on the alloc path; the free path
isn't touched. The overhead added is one function call, which only reads
and compares two pointers under RCU most of the time. This is comparable
to the overhead introduced by memcg_kmem_get_cache, which is called in
slab_alloc/slab_alloc_node earlier.

Anyway, if you think this is unacceptable, I don't mind dropping the
whole patch set and thinking more about how to fix this per-memcg caches
trickery. What do you think?

Thanks,
Vladimir