On Thu, 5 Nov 2015 19:18:05 +0300 Vladimir Davydov <vdavydov@xxxxxxxxxxxxx> wrote:

> On Thu, Nov 05, 2015 at 04:37:51PM +0100, Jesper Dangaard Brouer wrote:
> ...
> > @@ -1298,7 +1298,6 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
> >  	flags &= gfp_allowed_mask;
> >  	kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
> >  	kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
> > -	memcg_kmem_put_cache(s);
> >  	kasan_slab_alloc(s, object);
> >  }
> >
> > @@ -2557,6 +2556,7 @@ redo:
> >  		memset(object, 0, s->object_size);
> >
> >  	slab_post_alloc_hook(s, gfpflags, object);
> > +	memcg_kmem_put_cache(s);
>
> Asymmetric - not good IMO. What about passing an array of allocated objects
> to slab_post_alloc_hook? Then we could leave memcg_kmem_put_cache where
> it is now. I.e. here we'd have
>
> 	slab_post_alloc_hook(s, gfpflags, &object, 1);
>
> while in kmem_cache_alloc_bulk it'd look like
>
> 	slab_post_alloc_hook(s, flags, p, size);
>
> right before return.

In theory a good idea, but we just have to make sure that the compiler can
"see" that it can remove the loop when the CONFIG feature is turned off, and
that constant propagation works for the single-object (size == 1) case.

I'll verify this tomorrow or Monday (busy at a conference yesterday,
goo.gl/rRTdNL).

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer
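
P.S. To make the loop-removal concern concrete, below is a rough sketch of
what an array-based slab_post_alloc_hook could look like. This is only an
illustration of the idea under discussion, not the actual patch; the
parameter order follows the suggested call sites above, and the hook body
simply reuses the calls from the quoted hunks:

/*
 * Illustrative sketch only (not the final patch): slab_post_alloc_hook
 * taking an object array plus a count, as suggested above.
 */
static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
					void **p, size_t size)
{
	size_t i;

	flags &= gfp_allowed_mask;
	for (i = 0; i < size; i++) {
		void *object = p[i];

		kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
		kmemleak_alloc_recursive(object, s->object_size, 1,
					 s->flags, flags);
		kasan_slab_alloc(s, object);
	}
	memcg_kmem_put_cache(s);
}

The point to verify is that with the debug hooks compiled out the loop body
becomes empty and the loop itself disappears, and that for the fastpath call
with size == 1 the compiler const-propagates the count so no loop overhead is
added compared to the current single-object code.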