On Wednesday, 20 July 2011 at 10:34 -0500, Christoph Lameter wrote:
> On Wed, 20 Jul 2011, Eric Dumazet wrote:
>
> > [PATCH] slab: remove one NR_CPUS dependency
>
> Ok simple enough.
>
> Acked-by: Christoph Lameter <cl@xxxxxxxxx>

Thanks Christoph

Here is the second patch, also simple and working for me (tested on
x86_64, NR_CPUS=4096, on my 2x4x2 machine).

We could possibly avoid the extra 'array' pointer if NR_CPUS is known
to be a small value (<= 16, for example).

Note that adding ____cacheline_aligned_in_smp on nodelists[] actually
helps performance, as all following fields are read-only after
kmem_cache setup.

[PATCH] slab: shrink sizeof(struct kmem_cache)

Reduce high order allocations for some setups.
(NR_CPUS=4096 -> we need 64KB per kmem_cache struct)

Reported-by: Konstantin Khlebnikov <khlebnikov@xxxxxxxxxx>
Signed-off-by: Eric Dumazet <eric.dumazet@xxxxxxxxx>
CC: Pekka Enberg <penberg@xxxxxxxxxx>
CC: Christoph Lameter <cl@xxxxxxxxx>
CC: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---
 include/linux/slab_def.h |    4 ++--
 mm/slab.c                |   10 ++++++----
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index 83203ae..abedd8e 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -51,7 +51,7 @@ struct kmem_cache {
 /* 1) per-cpu data, touched during every alloc/free */
-	struct array_cache *array[NR_CPUS];
+	struct array_cache **array;
 /* 2) Cache tunables. Protected by cache_chain_mutex */
 	unsigned int batchcount;
 	unsigned int limit;
@@ -118,7 +118,7 @@ struct kmem_cache {
 	 * We still use [MAX_NUMNODES] and not [1] or [0] because cache_cache
 	 * is statically defined, so we reserve the max number of nodes.
 	 */
-	struct kmem_list3 *nodelists[MAX_NUMNODES];
+	struct kmem_list3 *nodelists[MAX_NUMNODES] ____cacheline_aligned_in_smp;
 	/*
 	 * Do not add fields after nodelists[]
 	 */
diff --git a/mm/slab.c b/mm/slab.c
index d96e223..f951015 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -574,7 +574,9 @@ static struct arraycache_init initarray_generic = {
     {0, BOOT_CPUCACHE_ENTRIES, 1, 0} };

 /* internal cache of cache description objs */
+static struct array_cache *array_cache_cache[NR_CPUS];
 static struct kmem_cache cache_cache = {
+	.array = array_cache_cache,
 	.batchcount = 1,
 	.limit = BOOT_CPUCACHE_ENTRIES,
 	.shared = 1,
@@ -1492,11 +1494,10 @@ void __init kmem_cache_init(void)
 		cache_cache.nodelists[node] = &initkmem_list3[CACHE_CACHE + node];

 	/*
-	 * struct kmem_cache size depends on nr_node_ids, which
-	 * can be less than MAX_NUMNODES.
+	 * struct kmem_cache size depends on nr_node_ids & nr_cpu_ids
 	 */
-	cache_cache.buffer_size = offsetof(struct kmem_cache, nodelists) +
-				  nr_node_ids * sizeof(struct kmem_list3 *);
+	cache_cache.buffer_size = offsetof(struct kmem_cache, nodelists[nr_node_ids]) +
+				  nr_cpu_ids * sizeof(struct array_cache *);
 #if DEBUG
 	cache_cache.obj_size = cache_cache.buffer_size;
 #endif
@@ -2308,6 +2309,7 @@ kmem_cache_create (const char *name, size_t size, size_t align,
 	if (!cachep)
 		goto oops;

+	cachep->array = (struct array_cache **)&cachep->nodelists[nr_node_ids];
 #if DEBUG
 	cachep->obj_size = size;