David Hildenbrand <david@xxxxxxxxxx> writes:

> On 08.10.24 15:27, Ritesh Harjani (IBM) wrote:
>> During early init CMA_MIN_ALIGNMENT_BYTES can be PAGE_SIZE,
>> since pageblock_order is still zero and it gets initialized
>> later during paging_init(), e.g.
>> paging_init() -> free_area_init() -> set_pageblock_order().
>>
>> One such use case is:
>> early_setup() -> early_init_devtree() -> fadump_reserve_mem()
>>
>> This causes the CMA memory alignment check to be bypassed in
>> cma_init_reserved_mem(). Later, cma_activate_area() can hit
>> a VM_BUG_ON_PAGE(pfn & ((1 << order) - 1)) if the reserved memory
>> area was not pageblock_order aligned.
>>
>> Instead of fixing it locally for the fadump case on PowerPC, I believe
>> this should be fixed for CMA_MIN_ALIGNMENT_BYTES.
>
> I think we should add a way to catch the usage of
> CMA_MIN_ALIGNMENT_BYTES before it actually has meaning (before
> pageblock_order was set)

Maybe by enforcing that pageblock_order is not zero where we do the
alignment check, i.e. in cma_init_reserved_mem()?

diff --git a/mm/cma.c b/mm/cma.c
index 3e9724716bad..36d753e7a0bf 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -182,6 +182,15 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 	if (!size || !memblock_is_region_reserved(base, size))
 		return -EINVAL;

+	/*
+	 * CMA uses CMA_MIN_ALIGNMENT_BYTES as the alignment requirement,
+	 * which needs pageblock_order to be initialized. Let's enforce it.
+	 */
+	if (!pageblock_order) {
+		pr_err("pageblock_order not yet initialized. Called during early boot?\n");
+		return -EINVAL;
+	}
+
 	/* ensure minimal alignment required by mm core */
 	if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
 		return -EINVAL;

> and fix the PowerPC usage by reshuffling the
> code accordingly.

Ok. I will submit a v2 with the above patch included. Thanks for the review!

-ritesh