On Tue, Jun 19, 2018 at 8:19 AM Jason A. Donenfeld <Jason@xxxxxxxxx> wrote:
>
> On Tue, Jun 19, 2018 at 5:08 PM Shakeel Butt <shakeelb@xxxxxxxxxx> wrote:
> > > > Are you using SLAB or SLUB? We stress kernel pretty heavily, but with
> > > > SLAB, and I suspect Shakeel may also be using SLAB. So if you are
> > > > using SLUB, there is significant chance that it's a bug in the SLUB
> > > > part of the change.
> > >
> > > Nice intuition; I am indeed using SLUB rather than SLAB...
> > >
> > Can you try once with SLAB? Just to make sure that it is SLUB specific.
>
> Sorry, I meant to mention that earlier. I tried with SLAB; the crash
> does not occur. This appears to be SLUB-specific.

Jason, can you try the following patch?

---
 mm/slub.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index a3b8467c14af..746cfe4515c2 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3673,9 +3673,17 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 
 bool __kmem_cache_empty(struct kmem_cache *s)
 {
+	int cpu;
 	int node;
 	struct kmem_cache_node *n;
 
+	for_each_online_cpu(cpu) {
+		struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
+
+		if (c->page || slub_percpu_partial(c))
+			return false;
+	}
+
 	for_each_kmem_cache_node(s, node, n)
 		if (n->nr_partial || slabs_node(s, node))
 			return false;
-- 
2.18.0.rc1.244.gcf134e6275-goog
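
The idea behind the patch: besides the per-node partial lists that
__kmem_cache_empty() already walks, SLUB parks slabs on each CPU, both the
currently active slab (c->page) and, with CONFIG_SLUB_CPU_PARTIAL, a percpu
partial list, and those slabs are taken off the node partial list while they
are cached there. The userspace sketch below illustrates that point; it is not
kernel code, and every name in it (toy_cache, toy_cpu, toy_node, and so on) is
invented for the example. It shows how an emptiness check that consults only
shared node state can report "empty" while a slab is still parked on a CPU,
which is the case the added for_each_online_cpu() loop is meant to cover.

/*
 * Illustrative userspace sketch only, not kernel code: all names here are
 * made up.  It models a cache with a per-CPU cached slab plus a shared node
 * partial count, and compares a node-only emptiness check with one that
 * also looks at per-CPU state.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_TOY_CPUS 4

struct toy_cpu {
	void *page;	/* active slab cached on this CPU, like c->page */
	void *partial;	/* per-CPU partial slabs, like slub_percpu_partial(c) */
};

struct toy_node {
	unsigned long nr_partial;	/* slabs on the shared node partial list */
};

struct toy_cache {
	struct toy_cpu cpu[NR_TOY_CPUS];
	struct toy_node node;
};

/* Node-only check, analogous to the pre-patch __kmem_cache_empty(). */
static bool toy_cache_empty_node_only(const struct toy_cache *s)
{
	return s->node.nr_partial == 0;
}

/* Scan per-CPU slabs first, analogous to what the patch adds. */
static bool toy_cache_empty(const struct toy_cache *s)
{
	int cpu;

	for (cpu = 0; cpu < NR_TOY_CPUS; cpu++)
		if (s->cpu[cpu].page || s->cpu[cpu].partial)
			return false;
	return toy_cache_empty_node_only(s);
}

int main(void)
{
	static struct toy_cache s;	/* zero-initialized: nothing cached yet */
	static int slab_stub;		/* stands in for a cached slab page */

	/* A slab parked on CPU 1, no longer on the node partial list. */
	s.cpu[1].page = &slab_stub;

	printf("node-only check:     %s\n",
	       toy_cache_empty_node_only(&s) ? "empty" : "not empty");
	printf("per-CPU aware check: %s\n",
	       toy_cache_empty(&s) ? "empty" : "not empty");
	return 0;
}

Built with something like gcc -Wall -o toy toy.c, the node-only check reports
the cache as empty while the per-CPU aware check correctly does not, which is
the discrepancy the proposed for_each_online_cpu() loop closes.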