On Tue, 2011-11-29 at 15:31 +0100, John Kacur wrote:
> On Mon, 28 Nov 2011, John Kacur wrote:
>
> > Hmm, I think I see how this can happen.
> >
> > cache_flusharray()
> >     spin_lock(&l3->list_lock);
> >     free_block(cachep, ac->entry, batchcount, node);
> >         slab_destroy()
> >             kmem_cache_free()
> >                 __cache_free()
> >                     cache_flusharray()
> >
>
> Could you try the following patch to see if it gets rid of your lockdep
> splat? (plan to neaten it up and send it to lkml if it works for you.)
>
> >From 29bf37fc62098bc87960e78f365083d9f52cf36a Mon Sep 17 00:00:00 2001
> From: John Kacur <jkacur@xxxxxxxxxx>
> Date: Tue, 29 Nov 2011 15:17:54 +0100
> Subject: [PATCH] Drop lock in free_block before calling slab_destroy to
>  prevent lockdep splats
>
> This prevents lockdep splats due to this call chain
> cache_flusharray()
>     spin_lock(&l3->list_lock);
>     free_block(cachep, ac->entry, batchcount, node);
>         slab_destroy()
>             kmem_cache_free()
>                 __cache_free()
>                     cache_flusharray()

John,

No, this is a false positive, and the code is correct; lockdep just needs
to be tweaked. If this was a real bug, then it would have locked up here
and not have continued, as spinlocks are not recursive.

This was complained about in mainline too:

  https://lkml.org/lkml/2011/10/3/364

There was a fix to a similar bug that Peter pointed out, but this bug
doesn't look like it was fixed.

Peter?

-- Steve

>
> Signed-off-by: John Kacur <jkacur@xxxxxxxxxx>
> ---
>  mm/slab.c |    2 ++
>  1 files changed, 2 insertions(+), 0 deletions(-)
>
> diff --git a/mm/slab.c b/mm/slab.c
> index b615658..635e16a 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -3667,7 +3667,9 @@ static void free_block(struct kmem_cache *cachep, void **objpp, int nr_objects,
>  			 * a different cache, refer to comments before
>  			 * alloc_slabmgmt.
>  			 */
> +			spin_unlock(&l3->list_lock);
>  			slab_destroy(cachep, slabp, true);
> +			spin_lock(&l3->list_lock);
>  		} else {
>  			list_add(&slabp->list, &l3->slabs_free);
>  		}
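[Note for the archive: the usual way lockdep is "tweaked" for a case like
this is to put the nested lock into its own lock class, so that taking the
outer cache's l3->list_lock and then the off-slab management cache's
l3->list_lock is no longer reported as recursive locking of one class. The
sketch below only illustrates that general technique; it is not the fix
that eventually went into mainline, it assumes it lives inside mm/slab.c
of this era (where struct kmem_list3 is private), and the key and helper
names are hypothetical.]

/*
 * Minimal sketch of the lockdep-annotation approach, not the actual
 * mainline fix.  Assumed to live in mm/slab.c of this era, where
 * struct kmem_list3 and its list_lock are defined; the key and helper
 * names below are made up for illustration.
 */
#include <linux/lockdep.h>
#include <linux/spinlock.h>

/* Separate lock class for the list_lock of an off-slab management cache. */
static struct lock_class_key off_slab_l3_list_lock_key;

static void annotate_off_slab_l3(struct kmem_list3 *l3)
{
	/*
	 * Move this list_lock into its own lock class.  The outer
	 * cache's list_lock and this one then belong to two distinct
	 * classes as far as lockdep is concerned, so the free_block()
	 * -> slab_destroy() -> kmem_cache_free() chain is no longer
	 * flagged as recursive locking, while the actual locking code
	 * stays unchanged.
	 */
	lockdep_set_class(&l3->list_lock, &off_slab_l3_list_lock_key);
}

[Such a helper would presumably be called when a cache with off-slab slab
management sets up its per-node lists, so the annotation is in place
before any objects are freed through that cache.]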