On Mon, Sep 09, 2019 at 12:10:16AM -0600, Yu Zhao wrote:
> If we are already under list_lock, don't call kmalloc(). Otherwise we
> will run into a deadlock, because kmalloc() also tries to grab the
> same lock.
>
> Instead, allocate pages directly. Given that page->objects currently
> has 15 bits, we only need 1 page. We may waste some memory, but we
> only do so when slub debug is on.
>
>   WARNING: possible recursive locking detected
>   --------------------------------------------
>   mount-encrypted/4921 is trying to acquire lock:
>   (&(&n->list_lock)->rlock){-.-.}, at: ___slab_alloc+0x104/0x437
>
>   but task is already holding lock:
>   (&(&n->list_lock)->rlock){-.-.}, at: __kmem_cache_shutdown+0x81/0x3cb
>
>   other info that might help us debug this:
>    Possible unsafe locking scenario:
>
>          CPU0
>          ----
>     lock(&(&n->list_lock)->rlock);
>     lock(&(&n->list_lock)->rlock);
>
>    *** DEADLOCK ***
>
> Signed-off-by: Yu Zhao <yuzhao@xxxxxxxxxx>

Looks sane to me:

Acked-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>

-- 
 Kirill A. Shutemov
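
As an illustration of the pattern the patch describes (a rough sketch,
not the actual diff): judging from the __kmem_cache_shutdown trace
above, the caller is presumably shaped like list_slab_objects() in
mm/slub.c, which needs a per-object bitmap while n->list_lock is
already held. kmalloc() can recurse into the slab allocator and take
the same lock, so the sketch goes to the page allocator directly:

	#include <linux/gfp.h>
	#include <linux/bitmap.h>

	unsigned long *map;

	/*
	 * Not safe under n->list_lock: kmalloc() may take the same
	 * lock while refilling its cache, e.g.
	 *
	 *	map = kmalloc(BITS_TO_LONGS(page->objects) *
	 *		      sizeof(long), GFP_ATOMIC);
	 *
	 * __get_free_page() bypasses slab entirely. page->objects is
	 * a 15-bit field, so at most 32767 bits are needed; a single
	 * 4K page holds 32768 bits, which is always enough.
	 */
	map = (unsigned long *)__get_free_page(GFP_ATOMIC);
	if (!map)
		return;
	bitmap_zero(map, page->objects);

	/* ... walk the slab, mark live objects, report leaks ... */

	free_page((unsigned long)map);

The cost of this approach is one full page instead of the few words
kmalloc() would return, which is the waste the patch message mentions;
it is only paid when slub debugging is enabled.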