Hi,

On Wed, Jan 17 2024, Alexander Graf wrote:

> When we finish populating our memory, we don't want to lose the scratch
> region as memory we can use for useful data. To do that, we mark it as
> CMA memory. That means that any allocation within it only happens with
> movable memory which we can then happily discard for the next kexec.
>
> That way we don't lose the scratch region's memory anymore for
> allocations after boot.
>
> Signed-off-by: Alexander Graf <graf@xxxxxxxxxx>
> [...]
> @@ -2188,6 +2185,16 @@ static void __init __free_pages_memory(unsigned long start, unsigned long end)
>  	}
>  }
>
> +static void mark_phys_as_cma(phys_addr_t start, phys_addr_t end)
> +{
> +	ulong start_pfn = pageblock_start_pfn(PFN_DOWN(start));
> +	ulong end_pfn = pageblock_align(PFN_UP(end));
> +	ulong pfn;
> +
> +	for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages)
> +		set_pageblock_migratetype(pfn_to_page(pfn), MIGRATE_CMA);

This fails to compile when CONFIG_CMA is disabled, since MIGRATE_CMA is
only defined when CONFIG_CMA is enabled. I think you should add CMA as a
dependency of CONFIG_MEMBLOCK_SCRATCH (see the sketch at the end of this
mail).

> +}
> +
>  static unsigned long __init __free_memory_core(phys_addr_t start,
>  					       phys_addr_t end)
>  {
> @@ -2249,6 +2256,17 @@ static unsigned long __init free_low_memory_core_early(void)
>
>  	memmap_init_reserved_pages();
>
> +	if (IS_ENABLED(CONFIG_MEMBLOCK_SCRATCH)) {
> +		/*
> +		 * Mark scratch mem as CMA before we return it. That way we
> +		 * ensure that no kernel allocations happen on it. That means
> +		 * we can reuse it as scratch memory again later.
> +		 */
> +		__for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
> +				     MEMBLOCK_SCRATCH, &start, &end, NULL)
> +			mark_phys_as_cma(start, end);
> +	}
> +
>  	/*
>  	 * We need to use NUMA_NO_NODE instead of NODE_DATA(0)->node_id
>  	 * because in some case like Node0 doesn't have RAM installed

-- 
Regards,
Pratyush Yadav
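
Here is a minimal sketch of the dependency I have in mind. I am assuming
CONFIG_MEMBLOCK_SCRATCH is declared in mm/Kconfig; adjust for wherever
the option actually lives in your series:

	# Hypothetical sketch: require CMA so that MIGRATE_CMA is always
	# available when the scratch marking code is built.
	config MEMBLOCK_SCRATCH
		bool
		depends on CMA

I would use "depends on" rather than "select" here, since CMA has
dependencies of its own (it depends on MMU, for example) and selecting
a symbol does not pull in its dependencies.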