Hi,

On 9/4/23 20:08, Joel Fernandes (Google) wrote:
> It is unsafe to dump vmalloc area information when trying to do so from
> some contexts. Add a safer trylock version of the same function to do a
> best-effort VMA finding and use it from vmalloc_dump_obj().

I was a bit confused by the subject, which suggests a new function is
added, but it seems open-coded in its only caller. I assume that's due to
evolution of the series. Something like:

mm/vmalloc: use trylock for vmap_area_lock in vmalloc_dump_obj() ?

I also notice it's trying hard to copy everything from "vm" to temporary
variables before unlocking, presumably to prevent use-after-free, so
should that be also mentioned in the changelog?

> [applied test robot feedback on unused function fix.]
> [applied Uladzislau feedback on locking.]
>
> Reported-by: Zhen Lei <thunder.leizhen@xxxxxxxxxxxxxxx>
> Cc: Paul E. McKenney <paulmck@xxxxxxxxxx>
> Cc: rcu@xxxxxxxxxxxxxxx
> Cc: Zqiang <qiang.zhang1211@xxxxxxxxx>
> Reviewed-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
> Fixes: 98f180837a89 ("mm: Make mem_dump_obj() handle vmalloc() memory")
> Cc: stable@xxxxxxxxxxxxxxx
> Signed-off-by: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
> ---
>  mm/vmalloc.c | 26 ++++++++++++++++++++++----
>  1 file changed, 22 insertions(+), 4 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 93cf99aba335..2c6a0e2ff404 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -4274,14 +4274,32 @@ void pcpu_free_vm_areas(struct vm_struct **vms, int nr_vms)
>  #ifdef CONFIG_PRINTK
>  bool vmalloc_dump_obj(void *object)
>  {
> -	struct vm_struct *vm;
>  	void *objp = (void *)PAGE_ALIGN((unsigned long)object);
> +	const void *caller;
> +	struct vm_struct *vm;
> +	struct vmap_area *va;
> +	unsigned long addr;
> +	unsigned int nr_pages;
>
> -	vm = find_vm_area(objp);
> -	if (!vm)
> +	if (!spin_trylock(&vmap_area_lock))
> +		return false;
> +	va = __find_vmap_area((unsigned long)objp, &vmap_area_root);
> +	if (!va) {
> +		spin_unlock(&vmap_area_lock);
>  		return false;
> +	}
> +
> +	vm = va->vm;
> +	if (!vm) {
> +		spin_unlock(&vmap_area_lock);
> +		return false;
> +	}
> +	addr = (unsigned long)vm->addr;
> +	caller = vm->caller;
> +	nr_pages = vm->nr_pages;
> +	spin_unlock(&vmap_area_lock);
>  	pr_cont(" %u-page vmalloc region starting at %#lx allocated at %pS\n",
> -		vm->nr_pages, (unsigned long)vm->addr, vm->caller);
> +		nr_pages, addr, caller);
>  	return true;
>  }
>  #endif