The current implementation of bootstrap() is not sufficient for kmem_cache
and kmem_cache_node.

First, kmem_cache. bootstrap() begins by calling kmem_cache_zalloc(). To
satisfy that allocation request, one of kmem_cache's slabs is moved to the
cpu slab. The current implementation only walks the n->partial lists, so
it misses this cpu slab of kmem_cache.

Second, kmem_cache_node. While slab_state is PARTIAL, create_boot_cache()
is called, and a slab of kmem_cache_node is likewise moved to the cpu slab
to satisfy the kmem_cache_node allocation request. We miss this slab as
well.

These misses have not caused any error so far, because we normally never
free objects coming from the first slab of kmem_cache or kmem_cache_node,
so the stale slab_cache pointer in those pages is never consulted. The
problem is solved if bootstrap() also considers the cpu slabs. This patch
implements that.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
---

diff --git a/mm/slub.c b/mm/slub.c
index abef30e..830348b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3613,11 +3613,22 @@ static int slab_memory_callback(struct notifier_block *self,
 static struct kmem_cache * __init bootstrap(struct kmem_cache *static_cache)
 {
+	int cpu;
 	int node;
 	struct kmem_cache *s = kmem_cache_zalloc(kmem_cache, GFP_NOWAIT);
 
 	memcpy(s, static_cache, kmem_cache->object_size);
 
+	for_each_possible_cpu(cpu) {
+		struct kmem_cache_cpu *c;
+		struct page *p;
+
+		c = per_cpu_ptr(s->cpu_slab, cpu);
+		p = c->page;
+		if (p)
+			p->slab_cache = s;
+	}
+
 	for_each_node_state(node, N_NORMAL_MEMORY) {
 		struct kmem_cache_node *n = get_node(s, node);
 		struct page *p;
-- 
1.7.9.5
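
For reviewers, here is a sketch of how bootstrap() reads once the hunk
above is applied. The diff's context ends at "struct page *p;", so the
tail of the function (the if (n) block walking n->partial and, under
CONFIG_SLUB_DEBUG, n->full, plus the final list_add()) is reproduced
from my reading of current mm/slub.c; treat the whole thing as an
illustration rather than the authoritative body:

static struct kmem_cache * __init bootstrap(struct kmem_cache *static_cache)
{
	int cpu;
	int node;
	struct kmem_cache *s = kmem_cache_zalloc(kmem_cache, GFP_NOWAIT);

	memcpy(s, static_cache, kmem_cache->object_size);

	/*
	 * New: the kmem_cache_zalloc() above was served from a slab that
	 * is now frozen as a cpu slab; repoint its slab_cache at the new
	 * kmem_cache structure, too.
	 */
	for_each_possible_cpu(cpu) {
		struct kmem_cache_cpu *c;
		struct page *p;

		c = per_cpu_ptr(s->cpu_slab, cpu);
		p = c->page;
		if (p)
			p->slab_cache = s;
	}

	/* Existing fixup: repoint the slabs sitting on the per-node lists. */
	for_each_node_state(node, N_NORMAL_MEMORY) {
		struct kmem_cache_node *n = get_node(s, node);
		struct page *p;

		if (n) {
			list_for_each_entry(p, &n->partial, lru)
				p->slab_cache = s;
#ifdef CONFIG_SLUB_DEBUG
			list_for_each_entry(p, &n->full, lru)
				p->slab_cache = s;
#endif
		}
	}
	list_add(&s->list, &slab_caches);
	return s;
}

As far as I can tell, walking all possible cpus is harmless this early in
boot: only the boot processor has performed any allocation, so c->page
should be NULL everywhere else and the loop ends up touching exactly the
one frozen slab per boot cache that the changelog describes.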
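
One more note on why the stale pointer was harmless until now: the free
path is what consults page->slab_cache. Abbreviated from my reading of
mm/slub.c (tracing, kmemleak and debug checks dropped), kfree() does
roughly the following; take it as a simplified sketch:

void kfree(const void *x)
{
	struct page *page;

	if (unlikely(ZERO_OR_NULL_PTR(x)))
		return;

	page = virt_to_head_page(x);
	if (unlikely(!PageSlab(page))) {
		/* Large kmalloc: backed directly by the page allocator */
		__free_pages(page, compound_order(page));
		return;
	}
	/* This is where a stale slab_cache pointer would bite. */
	slab_free(page->slab_cache, page, (void *)x, _RET_IP_);
}

So as long as nobody frees an object from the first slab of kmem_cache or
kmem_cache_node, the discarded static structure is never dereferenced,
which is why the missing fixup went unnoticed.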