On Wed, 2011-11-30 at 12:14 +0100, Peter Zijlstra wrote:
> On Wed, 2011-11-30 at 09:23 +0100, John Kacur wrote:
> > > This was complained about in mainline too:
> > >
> > > https://lkml.org/lkml/2011/10/3/364
> > >
> > > There was a fix to a similar bug that Peter pointed out, but this bug
> > > doesn't look like it was fixed.
> > >
> > > Peter?
>
> Re to the subject, every borkage of the nvidiot binary driver is a
> personal victory, I try as hard as possible to increase their pain.

Well, this bug is not caused by nvidiot, but it prevents us from seeing
whether there are locking issues in nvidiot. Because Thomas tripped over
this bug, lockdep shut down before it could analyze anything further
down, including nvidiot. But then again, maybe the bug Thomas is seeing
is in mainline, and nvidiot is helping us find bugs :)

> As to the actual subject of the email, see:
>
>   http://article.gmane.org/gmane.linux.kernel.mm/70863/match=

Thomas (Schauss),

Could you try this patch? I took Peter's patch and ported it to 3.0-rt.
Hopefully, I didn't screw it up.

-- Steve

diff --git a/mm/slab.c b/mm/slab.c
index 096bf0a..966a8c4 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -764,6 +764,7 @@ static enum {
 	PARTIAL_AC,
 	PARTIAL_L3,
 	EARLY,
+	LATE,
 	FULL
 } g_cpucache_up;

@@ -795,7 +796,7 @@ static void init_node_lock_keys(int q)
 {
 	struct cache_sizes *s = malloc_sizes;

-	if (g_cpucache_up != FULL)
+	if (g_cpucache_up < LATE)
 		return;

 	for (s = malloc_sizes; s->cs_size != ULONG_MAX; s++) {
@@ -1752,7 +1753,7 @@ void __init kmem_cache_init_late(void)
 	mutex_unlock(&cache_chain_mutex);

 	/* Done! */
-	g_cpucache_up = FULL;
+	g_cpucache_up = LATE;

 	/* Annotate slab for lockdep -- annotate the malloc caches */
 	init_lock_keys();
--
To unsubscribe from this list: send the line "unsubscribe linux-rt-users" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html