On 10/11/24 20:26, Ritesh Harjani (IBM) wrote:
> cma_init_reserved_mem() checks base and size alignment with
> CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during
> early boot when pageblock_order is 0. That means if base and size do
> not have pageblock_order alignment, it can cause functional failures
> during cma activate area.
>
> So let's enforce pageblock_order to be non-zero during
> cma_init_reserved_mem().
>
> Acked-by: David Hildenbrand <david@xxxxxxxxxx>
> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@xxxxxxxxx>
> ---
> v2 -> v3: Separated the series into 2 as discussed in v2.
> [v2]: https://lore.kernel.org/linuxppc-dev/cover.1728585512.git.ritesh.list@xxxxxxxxx/
>
>  mm/cma.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/mm/cma.c b/mm/cma.c
> index 3e9724716bad..36d753e7a0bf 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -182,6 +182,15 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
>  	if (!size || !memblock_is_region_reserved(base, size))
>  		return -EINVAL;
>
> +	/*
> +	 * CMA uses CMA_MIN_ALIGNMENT_BYTES as alignment requirement which
> +	 * needs pageblock_order to be initialized. Let's enforce it.
> +	 */
> +	if (!pageblock_order) {
> +		pr_err("pageblock_order not yet initialized. Called during early boot?\n");
> +		return -EINVAL;
> +	}
> +
>  	/* ensure minimal alignment required by mm core */
>  	if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
>  		return -EINVAL;
> --
> 2.46.0
>

LGTM. Hopefully the comment about the CMA_MIN_ALIGNMENT_BYTES alignment
requirement will also remind us to drop this new check if
CMA_MIN_ALIGNMENT_BYTES no longer depends on pageblock_order later.

Reviewed-by: Anshuman Khandual <anshuman.khandual@xxxxxxx>
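
As a side note, a minimal userspace sketch of the alignment arithmetic may
help illustrate the failure mode being fixed. It assumes
CMA_MIN_ALIGNMENT_BYTES effectively expands to PAGE_SIZE << pageblock_order
(via CMA_MIN_ALIGNMENT_PAGES == pageblock_nr_pages); the base/size values
below are made up purely for illustration:

/*
 * Userspace sketch, not kernel code: shows how the IS_ALIGNED() style
 * check degenerates to plain PAGE_SIZE alignment while pageblock_order
 * is still 0, and rejects the same region once pageblock_order is set.
 */
#include <stdio.h>
#include <stdbool.h>

#define PAGE_SIZE 4096UL	/* illustrative 4K page size */

static bool aligned_ok(unsigned long base, unsigned long size,
		       unsigned long pageblock_order)
{
	/* assumed expansion of CMA_MIN_ALIGNMENT_BYTES */
	unsigned long min_align = PAGE_SIZE << pageblock_order;

	/* same idea as IS_ALIGNED(base | size, min_align) */
	return !((base | size) & (min_align - 1));
}

int main(void)
{
	unsigned long base = 0x1000000UL + PAGE_SIZE;	/* only page aligned */
	unsigned long size = 64 * 1024 * 1024;		/* 64 MiB */

	/* Early boot: pageblock_order still 0, check passes. */
	printf("pageblock_order=0:  %s\n",
	       aligned_ok(base, size, 0) ? "passes" : "rejected");

	/* Later, with pageblock_order initialized, the same region fails. */
	printf("pageblock_order=10: %s\n",
	       aligned_ok(base, size, 10) ? "passes" : "rejected");

	return 0;
}

With pageblock_order still 0, a merely page-aligned region slips through the
alignment check and only blows up later during cma activate area, which is
why failing early here (or deferring the call) makes sense.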