Re: [PATCH v6 1/4] of: reserved_mem: Restruture how the reserved memory regions are processed

On Thu, Jun 13, 2024 at 09:05:18AM -0700, Oreoluwa Babatunde wrote:
> 
> On 6/10/2024 2:47 PM, Mark Brown wrote:
> > On Mon, Jun 10, 2024 at 02:34:03PM -0700, Nathan Chancellor wrote:
> >> On Tue, May 28, 2024 at 03:36:47PM -0700, Oreoluwa Babatunde wrote:
> >>> fdt_init_reserved_mem() is also now called from within the
> >>> unflatten_device_tree() function so that this step happens after the
> >>> page tables have been setup.
> >>> Signed-off-by: Oreoluwa Babatunde <quic_obabatun@xxxxxxxxxxx>
> >> I am seeing a warning when booting aspeed_g5_defconfig in QEMU that I
> >> bisected to this change in -next as commit a46cccb0ee2d ("of:
> >> reserved_mem: Restruture how the reserved memory regions are
> >> processed").
> > I'm also seeing issues in -next which I bisected to this commit, on the
> > original Raspberry Pi the cpufreq driver fails to come up and I see
> > (potentially separate?) backtraces:
> >
> > [    0.100390] ------------[ cut here ]------------
> > [    0.100476] WARNING: CPU: 0 PID: 1 at mm/memory.c:2835 __apply_to_page_range+0xd4/0x2c8
> > [    0.100637] Modules linked in:
> > [    0.100665] CPU: 0 PID: 1 Comm: swapper Not tainted 6.10.0-rc2-next-20240607 #1
> > [    0.100692] Hardware name: BCM2835
> > [    0.100705] Call trace: 
> > [    0.100727]  unwind_backtrace from show_stack+0x18/0x1c
> > [    0.100790]  show_stack from dump_stack_lvl+0x38/0x48
> > [    0.100833]  dump_stack_lvl from __warn+0x8c/0xf4
> > [    0.100888]  __warn from warn_slowpath_fmt+0x80/0xbc
> > [    0.100933]  warn_slowpath_fmt from __apply_to_page_range+0xd4/0x2c8
> > [    0.100983]  __apply_to_page_range from apply_to_page_range+0x20/0x28
> > [    0.101027]  apply_to_page_range from __dma_remap+0x58/0x88
> > [    0.101071]  __dma_remap from __alloc_from_contiguous+0x6c/0xa8
> > [    0.101106]  __alloc_from_contiguous from atomic_pool_init+0x9c/0x1c4
> > [    0.101169]  atomic_pool_init from do_one_initcall+0x68/0x158
> > [    0.101223]  do_one_initcall from kernel_init_freeable+0x1ac/0x1f0
> > [    0.101267]  kernel_init_freeable from kernel_init+0x1c/0x140
> > [    0.101309]  kernel_init from ret_from_fork+0x14/0x28
> > [    0.101344] Exception stack(0xdc80dfb0 to 0xdc80dff8)
> > [    0.101369] dfa0:                                     00000000 00000000 00000000 00000000
> > [    0.101393] dfc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> > [    0.101414] dfe0: 00000000 00000000 00000000 00000000 00000013 00000000
> > [    0.101428] ---[ end trace 0000000000000000 ]---
> >
> > Full boot log at:
> >
> >    https://lava.sirena.org.uk/scheduler/job/374962
> >
> > You can see the report of cpufreq not being loaded in the log.
> >
> > NFS boots also fail, apparently due to slowness bringing up a Debian
> > userspace, which may well be due to cpufreq issues:
> Hi Mark & Nathan,
> 
> Taking a look at this now and will provide a fix soon if
> needed.
> 
> At first glance, there are a couple of WARN_ON* calls in
> __apply_to_page_range(). Could you please run faddr2line
> and tell me which of the WARN_ON* cases we are hitting?

That shouldn't be needed, right? The warning already carries the line:
mm/memory.c:2835, which in next-20240607 is: if (WARN_ON_ONCE(pmd_leaf(*pmd))).

Thanks,
Conor.

Attachment: signature.asc
Description: PGP signature

