On Thu, Oct 05, 2023 at 09:12:25PM +0530, Mukesh Ojha wrote:
> 
> On 10/5/2023 5:14 PM, Pavan Kondeti wrote:
> > On Thu, Oct 05, 2023 at 04:52:20PM +0530, Mukesh Ojha wrote:
> > > Sorry for the late reply, I was on a long vacation.
> > > 
> > > On 9/14/2023 4:47 AM, Kees Cook wrote:
> > > > On Tue, Sep 12, 2023 at 11:18:20AM +0100, Will Deacon wrote:
> > > > > On Mon, Sep 11, 2023 at 04:23:50PM +0530, Mukesh Ojha wrote:
> > > > > > The reserved memory region for ramoops is assumed to be at a fixed
> > > > > > and known location when read from the devicetree. This is not
> > > > > > required for something like Qualcomm's minidump, which wants to
> > > > > > know the address of the ramoops region but places no hard
> > > > > > requirement on that address being fixed: most of its SoCs do not
> > > > > > support warm reset and do not use pstorefs at all. Instead, the
> > > > > > firmware collects the ramoops region itself, provided it knows the
> > > > > > address: the region is registered with the APSS minidump table,
> > > > > > which sits in a shared memory region in DDR, and the firmware has
> > > > > > access to that table during reset and collects the region when the
> > > > > > SoC crashes.
> > > > > > 
> > > > > > So, add support for reserving the ramoops region dynamically,
> > > > > > allocated early during boot when requested on the command line via
> > > > > > 'dyn_ramoops_size=', and fill in and export a reserved resource
> > > > > > structure so that it can be read by the ramoops driver.
> > > > > > 
> > > > > > Signed-off-by: Mukesh Ojha <quic_mojha@xxxxxxxxxxx>
> > > > > > ---
> > > > > >  arch/arm64/mm/init.c | 94 ++++++++++++++++++++++++++++++++++++++++++++++
> > > > > 
> > > > > Why does this need to be in the arch code? There's absolutely nothing
> > > > > arm64-specific here.
> > > > 
> > > > I would agree: this needs to be in ramoops itself, IMO. It should be a
> > > > ramoops module argument, too.
> > > > 
> > > > It being unhelpful for systems that don't have an external consumer is
> > > > certainly true, but I think it would still make more sense for this
> > > > change to live entirely within ramoops. Specifically: you're
> > > > implementing a pstore backend behavioral change. In the same way that
> > > > patch 10 is putting the "output" side of this into pstore/, I'd expect
> > > > the "input" side also in pstore/.
> > > 
> > > How do we reserve memory? Are you suggesting using the DMA APIs for
> > > dynamic ramoops?
> > > 
> > Sharing my thoughts:
> > 
> > Your patch is inspired by how kexec allocates memory for the crash
> > kernel, right?
> 
> Yes.
> 
> > There is a series [1] which moved that arch code (ARM64/x86) into the
> > generic kexec core. We should do something similar, as per the
> > feedback received here.
> > 
> > Coming to the "how" part, we still have to use the memblock API to
> > increase the chance of allocating contiguous memory. Since PSTORE_RAM
> > can also be compiled as a module, we probably need another pstore
> > layer, built statically into the kernel, that allocates memory using
> > the memblock API (once slab is available, memblock allocations are
> > redirected to slab). This layer could be enabled via
> > ARCH_WANTS_PSTORE_xxx or another config option that only supports 'y'.
> > PSTORE_RAM can still be a module, and when this layer is available it
> > supports dynamic ramoops. Another option would be to include this
> > layer in the PSTORE_RAM module itself but take away the module option
> > when the layer is enabled.
> 
> I thought about this, but the caller would still be in arch code,
> right? Would that be fine with others?
> 
The caller does not necessarily have to be in arch code. For example:
mm_core_init()->kfence_alloc_pool_and_metadata()

> 
> > [1]
> > https://lore.kernel.org/all/20211020020317.1220-6-thunder.leizhen@xxxxxxxxxx/
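
To make the above a bit more concrete, here is a rough sketch of what such a
built-in reservation layer could look like. This is not the actual patch: the
names below (pstore_reserve_mem(), ramoops_dyn_res, parse_dyn_ramoops_size())
are invented for illustration, and only the 'dyn_ramoops_size=' parameter name
comes from the patch description.

/*
 * Sketch only, not the real patch: a built-in (=y) pstore helper that
 * reserves the region with memblock before slab is up and exports it
 * as a struct resource for the (possibly modular) ramoops driver.
 */
#include <linux/errno.h>
#include <linux/export.h>
#include <linux/init.h>
#include <linux/ioport.h>
#include <linux/kernel.h>
#include <linux/memblock.h>
#include <linux/printk.h>
#include <linux/sizes.h>

static phys_addr_t ramoops_dyn_size __initdata;

/* e.g. "dyn_ramoops_size=4M" on the kernel command line */
static int __init parse_dyn_ramoops_size(char *p)
{
	if (!p)
		return -EINVAL;
	ramoops_dyn_size = memparse(p, &p);
	return 0;
}
early_param("dyn_ramoops_size", parse_dyn_ramoops_size);

/* Filled in at boot, read later by the ramoops driver. */
struct resource ramoops_dyn_res = {
	.name  = "ramoops_dyn",
	.flags = IORESOURCE_MEM,
};
EXPORT_SYMBOL_GPL(ramoops_dyn_res);

/*
 * Called from generic (non-arch) init code before slab is available,
 * analogous to mm_core_init()->kfence_alloc_pool_and_metadata().
 */
void __init pstore_reserve_mem(void)
{
	phys_addr_t base;

	if (!ramoops_dyn_size)
		return;

	base = memblock_phys_alloc(ramoops_dyn_size, SZ_2M);
	if (!base) {
		pr_warn("pstore: dynamic ramoops reservation failed\n");
		return;
	}

	ramoops_dyn_res.start = base;
	ramoops_dyn_res.end   = base + ramoops_dyn_size - 1;
}

The built-in piece stays tiny and can be forced to 'y', while PSTORE_RAM
itself remains a tristate, which is the split described above.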
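
On the consumer side, the modular ramoops driver would then only read the
exported resource instead of touching memblock itself, something along these
lines (again just a sketch: ramoops_use_dyn_region() is an invented helper,
while struct ramoops_platform_data and resource_size() are existing
interfaces):

#include <linux/errno.h>
#include <linux/ioport.h>
#include <linux/pstore_ram.h>

/* Exported by the built-in sketch above; would normally come from a header. */
extern struct resource ramoops_dyn_res;

/*
 * Sketch: feed the dynamically reserved region into ramoops instead of
 * a fixed address taken from the devicetree.
 */
static int ramoops_use_dyn_region(struct ramoops_platform_data *pdata)
{
	if (!ramoops_dyn_res.start)
		return -ENODEV;	/* nothing was reserved at boot */

	pdata->mem_address = ramoops_dyn_res.start;
	pdata->mem_size = resource_size(&ramoops_dyn_res);
	return 0;
}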