Currently mem_dump_obj() can be invoked from call_rcu(), and call_rcu()
may be invoked from a non-preemptible code segment.  For an object
allocated from vmalloc(), the following scenario can occur:

CPU 0
task context
  spin_lock(&vmap_area_lock)
      interrupt context
        call_rcu()
          mem_dump_obj()
            vmalloc_dump_obj()
              spin_lock(&vmap_area_lock)  <-- deadlock

In addition, on PREEMPT_RT kernels spinlocks are converted to sleepable
locks, so the vmap_area_lock spinlock must not be acquired from a
non-preemptible code segment at all.

Therefore, this commit makes vmalloc_dump_obj() do its work only in a
preemptible context: it returns false when called from interrupt
context or, on PREEMPT_RT, from a non-preemptible context.

Signed-off-by: Zqiang <qiang1.zhang@xxxxxxxxx>
---
 v1->v2:
 Add IS_ENABLED(CONFIG_PREEMPT_RT) check.

 mm/util.c    | 4 +++-
 mm/vmalloc.c | 4 ++++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/mm/util.c b/mm/util.c
index 12984e76767e..2b0222a728cc 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -1128,7 +1128,9 @@ void mem_dump_obj(void *object)
                 return;
 
         if (virt_addr_valid(object))
-                type = "non-slab/vmalloc memory";
+                type = "non-slab memory";
+        else if (is_vmalloc_addr(object))
+                type = "vmalloc memory";
         else if (object == NULL)
                 type = "NULL pointer";
         else if (object == ZERO_SIZE_PTR)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ccaa461998f3..018e417b12d6 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4034,6 +4034,10 @@ bool vmalloc_dump_obj(void *object)
         struct vm_struct *vm;
         void *objp = (void *)PAGE_ALIGN((unsigned long)object);
 
+        if (!is_vmalloc_addr(objp) || ((IS_ENABLED(CONFIG_PREEMPT_RT) &&
+            !preemptible()) || in_interrupt()))
+                return false;
+
         vm = find_vm_area(objp);
         if (!vm)
                 return false;
-- 
2.25.1
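
For reference, a minimal sketch (not part of the patch) of the deadlock
pattern described in the changelog above; the caller name
dump_from_task_context() is a hypothetical placeholder, and the comments
restate the call chain from the scenario:

/*
 * Illustrative only: a task on CPU 0 already holds vmap_area_lock
 * when an interrupt arrives on the same CPU.
 */
static void dump_from_task_context(void)        /* hypothetical caller */
{
        spin_lock(&vmap_area_lock);             /* task context, CPU 0 */

        /*
         * An interrupt on this CPU may now run
         *   call_rcu() -> mem_dump_obj() -> vmalloc_dump_obj(),
         * and vmalloc_dump_obj() would try to take vmap_area_lock
         * again through find_vm_area(), deadlocking the CPU.
         *
         * With this patch, vmalloc_dump_obj() instead returns false
         * when called from interrupt context, or from a
         * non-preemptible context on PREEMPT_RT.
         */

        spin_unlock(&vmap_area_lock);
}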