Re: [PATCH v9 28/69] mm/mmap: reorganize munmap to use maple states

* Yu Zhao <yuzhao@xxxxxxxxxx> [220615 21:59]:
> On Wed, Jun 15, 2022 at 7:50 PM Liam Howlett <liam.howlett@xxxxxxxxxx> wrote:
> >
> > * Yu Zhao <yuzhao@xxxxxxxxxx> [220615 17:17]:
> >
> > ...
> >
> > > > Yes, I used the same parameters with 512GB of RAM, and the kernel with
> > > > KASAN and other debug options.
> > >
> > > Sorry, Liam. I got the same crash :(
> >
> > Thanks for running this promptly.  I am trying to get my own server
> > setup now.
> >
> > >
> > > 9d27f2f1487a (tag: mm-everything-2022-06-14-19-05, akpm/mm-everything)
> > > 00d4d7b519d6 fs/userfaultfd: Fix vma iteration in mas_for_each() loop
> > > 55140693394d maple_tree: Make mas_prealloc() error checking more generic
> > > 2d7e7c2fcf16 maple_tree: Fix mt_destroy_walk() on full non-leaf non-alloc nodes
> > > 4d4472148ccd maple_tree: Change spanning store to work on larger trees
> > > ea36bcc14c00 test_maple_tree: Add tests for preallocations and large spanning writes
> > > 0d2aa86ead4f mm/mlock: Drop dead code in count_mm_mlocked_page_nr()
> > >
> > > ==================================================================
> > > BUG: KASAN: slab-out-of-bounds in mab_mas_cp+0x2d9/0x6c0
> > > Write of size 136 at addr ffff88c35a3b9e80 by task stress-ng/19303
> > >
> > > CPU: 66 PID: 19303 Comm: stress-ng Tainted: G S        I       5.19.0-smp-DEV #1
> > > Call Trace:
> > >  <TASK>
> > >  dump_stack_lvl+0xc5/0xf4
> > >  print_address_description+0x7f/0x460
> > >  print_report+0x10b/0x240
> > >  ? mab_mas_cp+0x2d9/0x6c0
> > >  kasan_report+0xe6/0x110
> > >  ? mast_spanning_rebalance+0x2634/0x29b0
> > >  ? mab_mas_cp+0x2d9/0x6c0
> > >  kasan_check_range+0x2ef/0x310
> > >  ? mab_mas_cp+0x2d9/0x6c0
> > >  ? mab_mas_cp+0x2d9/0x6c0
> > >  memcpy+0x44/0x70
> > >  mab_mas_cp+0x2d9/0x6c0
> > >  mas_spanning_rebalance+0x1a3e/0x4f90
> >
> > Does this translate to an inline around line 2997?
> > And then probably around 2808?
> 
> $ ./scripts/faddr2line vmlinux mab_mas_cp+0x2d9
> mab_mas_cp+0x2d9/0x6c0:
> mab_mas_cp at lib/maple_tree.c:1988
> $ ./scripts/faddr2line vmlinux mas_spanning_rebalance+0x1a3e
> mas_spanning_rebalance+0x1a3e/0x4f90:
> mast_cp_to_nodes at lib/maple_tree.c:?
> (inlined by) mas_spanning_rebalance at lib/maple_tree.c:2997
> $ ./scripts/faddr2line vmlinux mas_wr_spanning_store+0x16c5
> mas_wr_spanning_store+0x16c5/0x1b80:
> mas_wr_spanning_store at lib/maple_tree.c:?
> 
> No idea why faddr2line didn't work for the last two addresses. GDB
> seems more reliable.
> 
> (gdb) li *(mab_mas_cp+0x2d9)
> 0xffffffff8226b049 is in mab_mas_cp (lib/maple_tree.c:1988).
> (gdb) li *(mas_spanning_rebalance+0x1a3e)
> 0xffffffff822633ce is in mas_spanning_rebalance (lib/maple_tree.c:2801).
> (gdb) li *(mas_wr_spanning_store+0x16c5)
> 0xffffffff8225cfb5 is in mas_wr_spanning_store (lib/maple_tree.c:4030).


Thanks.  I am not having luck recreating it.  I am hitting what looks
like an unrelated issue in the unstable mm, "scheduling while atomic".
I will try the git commit you indicated above.
