On Wed, Jul 02, 2014 at 09:20:20AM -0500, Christoph Lameter wrote:
>On Wed, 2 Jul 2014, Wei Yang wrote:
>
>> My patch is somewhat convoluted since I wanted to preserve the original logic
>> and make minimal changes. And yes, it does not look that nice to the audience.
>
>Well I was the author of the initial "convoluted" logic.
>
>> I feel a little hurt by this patch. What I found and worked on is gone with
>> this patch.
>
>Ok how about giving this one additional revision. Maybe you can make the
>function even easier to read? F.e. the setting of the NULL pointer at the
>end of the loop is ugly.

Hi, Christoph

Here is my refined version; I hope it is friendlier to the audience.

From 3f4fdeab600e53fdcbd65c817db3aa560ac16bfb Mon Sep 17 00:00:00 2001
From: Wei Yang <weiyang@xxxxxxxxxxxxxxxxxx>
Date: Tue, 24 Jun 2014 15:48:59 +0800
Subject: [PATCH] slub: reduce duplicate creation on the first object

When a kmem_cache is created with a ctor, each object in the kmem_cache
is initialized before it is ready for use. In the slub implementation,
however, the first object is initialized twice.

This patch removes the duplicate initialization of the first object.

Fix commit 7656c72b: SLUB: add macros for scanning objects in a slab.

Signed-off-by: Wei Yang <weiyang@xxxxxxxxxxxxxxxxxx>
---
 mm/slub.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index b2b0473..79611d9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -288,6 +288,10 @@ static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
 	for (__p = (__addr); __p < (__addr) + (__objects) * (__s)->size;\
 			__p += (__s)->size)
 
+#define for_each_object_idx(__p, __idx, __s, __addr, __objects) \
+	for (__p = (__addr), __idx = 1; __idx <= __objects;\
+			__p += (__s)->size, __idx++)
+
 /* Determine object index from a given position */
 static inline int slab_index(void *p, struct kmem_cache *s, void *addr)
 {
@@ -1409,9 +1413,9 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 {
 	struct page *page;
 	void *start;
-	void *last;
 	void *p;
 	int order;
+	int idx;
 
 	BUG_ON(flags & GFP_SLAB_BUG_MASK);
 
@@ -1432,14 +1436,13 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 	if (unlikely(s->flags & SLAB_POISON))
 		memset(start, POISON_INUSE, PAGE_SIZE << order);
 
-	last = start;
-	for_each_object(p, s, start, page->objects) {
-		setup_object(s, page, last);
-		set_freepointer(s, last, p);
-		last = p;
+	for_each_object_idx(p, idx, s, start, page->objects) {
+		setup_object(s, page, p);
+		if (likely(idx < page->objects))
+			set_freepointer(s, p, p + s->size);
+		else
+			set_freepointer(s, p, NULL);
 	}
-	setup_object(s, page, last);
-	set_freepointer(s, last, NULL);
 
 	page->freelist = start;
 	page->inuse = page->objects;
-- 
1.7.9.5

-- 
Richard Yang
Help you, Help me
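
[Editor's illustration] To make the double initialization concrete, here is a
minimal userspace sketch. It is not kernel code: the slab array, OBJ_SIZE,
setup_object() and set_freepointer() below are simplified stand-ins invented
for this demo. It replays both loops and counts how often each object slot is
set up; the old loop reports the first object being set up twice, the new
indexed loop touches every object exactly once.

	#include <stdio.h>
	#include <string.h>

	enum { OBJ_SIZE = 32, NOBJECTS = 4 };

	static char slab[OBJ_SIZE * NOBJECTS];	/* pretend slab page */
	static int setup_count[NOBJECTS];	/* setups seen per slot */

	/* Stand-in for setup_object(): just count the call per slot. */
	static void setup_object(void *p)
	{
		setup_count[((char *)p - slab) / OBJ_SIZE]++;
	}

	/* Stand-in for set_freepointer(): store the link in the object. */
	static void set_freepointer(void *object, void *fp)
	{
		memcpy(object, &fp, sizeof(fp));
	}

	int main(void)
	{
		char *start = slab, *p, *last;
		int i, idx;

		/* Old loop: "last" trails "p", so setup_object(start) is
		 * called in both the first and the second iteration. */
		last = start;
		for (p = start; p < start + NOBJECTS * OBJ_SIZE; p += OBJ_SIZE) {
			setup_object(last);
			set_freepointer(last, p);
			last = p;
		}
		setup_object(last);
		set_freepointer(last, NULL);

		printf("old loop:");
		for (i = 0; i < NOBJECTS; i++) {
			printf(" obj%d=%d", i, setup_count[i]);
			setup_count[i] = 0;	/* reset before the new loop */
		}
		printf("\n");			/* prints obj0=2, the rest =1 */

		/* New loop: every object is visited exactly once; the index
		 * decides whether to link to the next object or end the list. */
		for (p = start, idx = 1; idx <= NOBJECTS; p += OBJ_SIZE, idx++) {
			setup_object(p);
			if (idx < NOBJECTS)
				set_freepointer(p, p + OBJ_SIZE);
			else
				set_freepointer(p, NULL);
		}

		printf("new loop:");
		for (i = 0; i < NOBJECTS; i++)
			printf(" obj%d=%d", i, setup_count[i]);
		printf("\n");			/* prints =1 for every object */

		return 0;
	}

The sketch also shows why the old loop's first set_freepointer(start, start)
was redundant: it wrote a self-link that the second iteration immediately
overwrote.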