When the hugepage parameter of vma_alloc_folio() is true, it indicates
that we should only try the allocation on the preferred node if possible,
which is intended for PMD_ORDER, but it could lead to lots of failures
for large folio allocations. Luckily, the hugepage parameter has been
deprecated since commit ddc1a5cbc05d ("mempolicy: alloc_pages_mpol() for
NUMA policy without vma"), so this has no effect on runtime behavior.

Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
---
Found the issue when backporting mthp to an internal kernel without
ddc1a5cbc05d; for mainline there is no issue. No clue why the hugepage
parameter was retained, maybe just kill the parameter for mainline?

 mm/memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index b84443e689a8..89a15858348a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4479,7 +4479,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 	gfp = vma_thp_gfp_mask(vma);
 	while (orders) {
 		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
-		folio = vma_alloc_folio(gfp, order, vma, addr, true);
+		folio = vma_alloc_folio(gfp, order, vma, addr, false);
 		if (folio) {
 			if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
 				count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
--
2.27.0
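
P.S. for context, a minimal sketch of what the hugepage=true hint used to
mean before ddc1a5cbc05d, assuming the pre-removal mempolicy behavior; the
helper name below is made up for illustration and this is not the actual
mempolicy.c code. The point is that the allocation was pinned to the
preferred node with __GFP_THISNODE and not retried hard, which is
reasonable for a one-off PMD-order THP but causes lots of failures when
every mthp order takes the same preferred-node-only path.

/*
 * Illustrative sketch only: roughly what hugepage=true implied for the
 * allocation attempt in the old vma_alloc_folio() fast path.
 */
static struct folio *alloc_on_preferred_node_only(gfp_t gfp, int order,
						  int preferred_nid)
{
	/*
	 * Try only the preferred node: don't fall back to other nodes
	 * and don't retry aggressively, since a remote THP was considered
	 * worse than falling back to small pages.
	 */
	return __folio_alloc_node(gfp | __GFP_THISNODE | __GFP_NORETRY,
				  order, preferred_nid);
}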