> On Apr 5, 2024, at 00:25, Frank van der Linden <fvdl@xxxxxxxxxx> wrote:
>
> The hugetlb_cma code passes 0 in the order_per_bit argument to
> cma_declare_contiguous_nid (the alignment, computed using the
> page order, is correctly passed in).
>
> This causes a bit in the cma allocation bitmap to always represent
> a 4k page, making the bitmaps potentially very large, and slower.
>
> So, correctly pass in the order instead.
>
> Signed-off-by: Frank van der Linden <fvdl@xxxxxxxxxx>
> Cc: Roman Gushchin <roman.gushchin@xxxxxxxxx>
> Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
> ---
>  mm/hugetlb.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 23ef240ba48a..6dc62d8b2a3a 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -7873,9 +7873,9 @@ void __init hugetlb_cma_reserve(int order)
> 		 * huge page demotion.
> 		 */
> 		res = cma_declare_contiguous_nid(0, size, 0,
> -					PAGE_SIZE << HUGETLB_PAGE_ORDER,
> -					0, false, name,
> -					&hugetlb_cma[nid], nid);
> +					PAGE_SIZE << HUGETLB_PAGE_ORDER,
> +					HUGETLB_PAGE_ORDER, false, name,

IIUC, we could take this optimization further and change order_per_bit to 'MAX_PAGE_ORDER + 1', since only gigantic hugetlb pages can be allocated from this CMA pool, meaning any such page is at least 2^(MAX_PAGE_ORDER + 1) pages in size.

Thanks.

> +					&hugetlb_cma[nid], nid);
> 		if (res) {
> 			pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
> 				res, nid);
> --
> 2.44.0.478.gd926399ef9-goog
>