On Wed, 6 Sept 2023 at 16:09, Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>
>
> The quilt patch titled
>      Subject: mm/vmalloc: add a safer version of find_vm_area() for debug
> has been removed from the -mm tree.  Its filename was
>      mm-vmalloc-add-a-safer-version-of-find_vm_area-for-debug.patch
>
> This patch was dropped because it was merged into the mm-hotfixes-stable branch
> of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Hmm, I had outstanding review on this :/ I guess I will have to send a
follow up patch to address those concerns...

>
> ------------------------------------------------------
> From: "Joel Fernandes (Google)" <joel@xxxxxxxxxxxxxxxxx>
> Subject: mm/vmalloc: add a safer version of find_vm_area() for debug
> Date: Mon, 4 Sep 2023 18:08:04 +0000
>
> It is unsafe to dump vmalloc area information when trying to do so from
> some contexts.  Add a safer trylock version of the same function to do a
> best-effort VMA finding and use it from vmalloc_dump_obj().
>
> [applied test robot feedback on unused function fix.]
> [applied Uladzislau feedback on locking.]
> Link: https://lkml.kernel.org/r/20230904180806.1002832-1-joel@xxxxxxxxxxxxxxxxx
> Fixes: 98f180837a89 ("mm: Make mem_dump_obj() handle vmalloc() memory")
> Signed-off-by: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
> Reviewed-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
> Reported-by: Zhen Lei <thunder.leizhen@xxxxxxxxxxxxxxx>
> Cc: Paul E. McKenney <paulmck@xxxxxxxxxx>
> Cc: Zqiang <qiang.zhang1211@xxxxxxxxx>
> Cc: <stable@xxxxxxxxxxxxxxx>
> Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> ---
>
>  mm/vmalloc.c |   26 ++++++++++++++++++++++----
>  1 file changed, 22 insertions(+), 4 deletions(-)
>
> --- a/mm/vmalloc.c~mm-vmalloc-add-a-safer-version-of-find_vm_area-for-debug
> +++ a/mm/vmalloc.c
> @@ -4278,14 +4278,32 @@ void pcpu_free_vm_areas(struct vm_struct
>  #ifdef CONFIG_PRINTK
>  bool vmalloc_dump_obj(void *object)
>  {
> -	struct vm_struct *vm;
>  	void *objp = (void *)PAGE_ALIGN((unsigned long)object);
> +	const void *caller;
> +	struct vm_struct *vm;
> +	struct vmap_area *va;
> +	unsigned long addr;
> +	unsigned int nr_pages;
> +
> +	if (!spin_trylock(&vmap_area_lock))
> +		return false;
> +	va = __find_vmap_area((unsigned long)objp, &vmap_area_root);
> +	if (!va) {
> +		spin_unlock(&vmap_area_lock);
> +		return false;
> +	}
>
> -	vm = find_vm_area(objp);
> -	if (!vm)
> +	vm = va->vm;
> +	if (!vm) {
> +		spin_unlock(&vmap_area_lock);
>  		return false;
> +	}
> +	addr = (unsigned long)vm->addr;
> +	caller = vm->caller;
> +	nr_pages = vm->nr_pages;
> +	spin_unlock(&vmap_area_lock);
>  	pr_cont(" %u-page vmalloc region starting at %#lx allocated at %pS\n",
> -		vm->nr_pages, (unsigned long)vm->addr, vm->caller);
> +		nr_pages, addr, caller);
>  	return true;
>  }
>  #endif
> _
>
> Patches currently in -mm which might be from joel@xxxxxxxxxxxxxxxxx are
>

--
Lorenzo Stoakes
https://ljs.io
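
[Editor's sketch, not part of the email thread.] The patch above follows a common best-effort pattern for debug/printing paths: take the lock with a trylock so the dumper bails out instead of deadlocking if called from a context where the lock is already held or cannot be taken, snapshot the fields needed for printing while the lock is held, and only print after the lock is dropped. A minimal user-space analogue of that pattern is sketched below; the struct, lookup helper, and addresses are invented for illustration, and the kernel code itself uses spin_trylock() on vmap_area_lock and __find_vmap_area() as shown in the diff.

/* Compile with: cc -pthread sketch.c */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct region {
	unsigned long addr;     /* start address of the region */
	unsigned int  nr_pages; /* size in pages */
	const char   *caller;   /* who allocated it */
};

/* Toy lookup table; stands in for the vmap area tree. */
static struct region regions[] = {
	{ 0x7f0000000000UL, 4, "alloc_buffer" },
};

static pthread_mutex_t region_lock = PTHREAD_MUTEX_INITIALIZER;

/* Must be called with region_lock held; analogue of __find_vmap_area(). */
static struct region *find_region_locked(unsigned long addr)
{
	for (size_t i = 0; i < sizeof(regions) / sizeof(regions[0]); i++)
		if (regions[i].addr == addr)
			return &regions[i];
	return NULL;
}

/* Best-effort dump: returns false if the lock is contended or the
 * region is not found, true if something was printed. */
static bool dump_region(unsigned long addr)
{
	struct region *r;
	struct region snap;

	/* Trylock: give up rather than risk deadlocking a debug path. */
	if (pthread_mutex_trylock(&region_lock) != 0)
		return false;

	r = find_region_locked(addr);
	if (!r) {
		pthread_mutex_unlock(&region_lock);
		return false;
	}

	/* Snapshot the fields we need, then drop the lock before
	 * printing so nothing is dereferenced after the unlock. */
	snap = *r;
	pthread_mutex_unlock(&region_lock);

	printf("%u-page region starting at %#lx allocated by %s\n",
	       snap.nr_pages, snap.addr, snap.caller);
	return true;
}

int main(void)
{
	if (!dump_region(0x7f0000000000UL))
		printf("region not dumped (lock busy or not found)\n");
	return 0;
}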