Currently, mem_dump_obj() is invoked from call_rcu(), and call_rcu()
may be invoked from a non-preemptible code segment. For an object
allocated from vmalloc(), the following deadlock scenario may occur:

CPU 0
tasks context
  spin_lock(&vmap_area_lock)
    Interrupt context
      call_rcu()
        mem_dump_obj
          vmalloc_dump_obj
            spin_lock(&vmap_area_lock)  <-- deadlock

In addition, on a PREEMPT_RT kernel spinlocks are converted to
sleepable locks, so the vmap_area_lock spinlock likewise cannot be
acquired in a non-preemptible code segment.

Therefore, this commit makes sure vmalloc_dump_obj() is only called
from a preemptible context.

Signed-off-by: Zqiang <qiang1.zhang@xxxxxxxxx>
---
 mm/util.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/mm/util.c b/mm/util.c
index 12984e76767e..465f8b8824ca 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -1124,8 +1124,12 @@ void mem_dump_obj(void *object)
 		return;
 	}
 
-	if (vmalloc_dump_obj(object))
-		return;
+	if (is_vmalloc_addr(object)) {
+		if (preemptible() && vmalloc_dump_obj(object))
+			return;
+		type = "vmalloc memory";
+		goto end;
+	}
 
 	if (virt_addr_valid(object))
 		type = "non-slab/vmalloc memory";
@@ -1135,7 +1139,7 @@ void mem_dump_obj(void *object)
 		type = "zero-size pointer";
 	else
 		type = "non-paged memory";
-
+end:
 	pr_cont(" %s\n", type);
 }
 EXPORT_SYMBOL_GPL(mem_dump_obj);
-- 
2.25.1
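
[Not part of the patch: a minimal userspace C sketch of the guard
pattern the change applies, i.e. only enter a lock-taking slow path
when the calling context allows it, otherwise fall back to a coarse
description. Everything here -- dump_obj(), describe_object(),
area_lock and the in_atomic_ctx flag -- is a hypothetical stand-in
for mem_dump_obj(), vmalloc_dump_obj(), vmap_area_lock and
preemptible(); it is not kernel code.]

/*
 * Userspace illustration of the pattern in the patch above.
 * Build with: cc sketch.c -lpthread
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t area_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Stand-in for vmalloc_dump_obj(): the detailed report needs
 * area_lock, so it must not run from a context where taking that
 * lock could deadlock (or sleep, in the PREEMPT_RT analogy).
 */
static bool describe_object(void *obj)
{
	pthread_mutex_lock(&area_lock);
	printf("object %p: detailed report\n", obj);
	pthread_mutex_unlock(&area_lock);
	return true;
}

/* Stand-in for mem_dump_obj() after the patch. */
static void dump_obj(void *obj, bool in_atomic_ctx)
{
	/* Mirrors "if (preemptible() && vmalloc_dump_obj(object))". */
	if (!in_atomic_ctx && describe_object(obj))
		return;

	/* Coarse fallback, like printing " vmalloc memory". */
	printf("object %p: vmalloc memory\n", obj);
}

int main(void)
{
	int x;

	dump_obj(&x, false);	/* safe context: detailed report */
	dump_obj(&x, true);	/* "atomic" context: fallback only */
	return 0;
}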