> https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/commit/?id=c65c1877bd6826ce0d9713d76e30a7bed8e49f38

I think the assert is just bogus, at least in the early case. early_kmem_cache_node_alloc() says:

 * No kmalloc_node yet so do it by hand. We know that this is the first
 * slab on the node for this slabcache. There are no concurrent accesses
 * possible.

Should we do something like the attached patch? (very lightly tested)
---

 b/mm/slub.c |    6 ++++++
 1 file changed, 6 insertions(+)

diff -puN mm/slub.c~slub-lockdep-workaround mm/slub.c
--- a/mm/slub.c~slub-lockdep-workaround	2014-01-14 09:19:22.418942641 -0800
+++ b/mm/slub.c	2014-01-14 09:29:55.441297460 -0800
@@ -2890,7 +2890,13 @@ static void early_kmem_cache_node_alloc(
 	init_kmem_cache_node(n);
 	inc_slabs_node(kmem_cache_node, node, page->objects);
 
+	/*
+	 * the lock is for lockdep's sake, not for any actual
+	 * race protection
+	 */
+	spin_lock(&n->list_lock);
 	add_partial(n, page, DEACTIVATE_TO_HEAD);
+	spin_unlock(&n->list_lock);
 }
 
 static void free_kmem_cache_nodes(struct kmem_cache *s)
_