The patch titled
     Subject: hugetlb: force allocating surplus hugepages on mempolicy allowed nodes
has been added to the -mm mm-unstable branch.  Its filename is
     hugetlb-force-allocating-surplus-hugepages-on-mempolicy-allowed-nodes-v2.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/hugetlb-force-allocating-surplus-hugepages-on-mempolicy-allowed-nodes-v2.patch

This patch will later appear in the mm-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Aristeu Rozanski <aris@xxxxxxxxx>
Subject: hugetlb: force allocating surplus hugepages on mempolicy allowed nodes
Date: Mon, 1 Jul 2024 17:23:43 -0400

v2: - attempt to make the description clearer
    - prevent uninitialized use of folio in case the current process is not
      part of any node with memory

Link: https://lkml.kernel.org/r/20240701212343.GG844599@xxxxxxxxxxxxxxxxx
Signed-off-by: Aristeu Rozanski <aris@xxxxxxxxx>
Cc: Vishal Moola <vishal.moola@xxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Aristeu Rozanski <aris@xxxxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/hugetlb.c |    1 +
 1 file changed, 1 insertion(+)

--- a/mm/hugetlb.c~hugetlb-force-allocating-surplus-hugepages-on-mempolicy-allowed-nodes-v2
+++ a/mm/hugetlb.c
@@ -2631,6 +2631,7 @@ static int gather_surplus_pages(struct h
 retry:
 	spin_unlock_irq(&hugetlb_lock);
 	for (i = 0; i < needed; i++) {
+		folio = NULL;
 		for_each_node_mask(node, cpuset_current_mems_allowed) {
 			if (!mbind_nodemask || node_isset(node, *mbind_nodemask)) {
 				folio = alloc_surplus_hugetlb_folio(h, htlb_alloc_mask(h),
_

Patches currently in -mm which might be from aris@xxxxxxxxx are

hugetlb-force-allocating-surplus-hugepages-on-mempolicy-allowed-nodes-v2.patch
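[Editor's illustration, not part of the patch mail: the hunk above resets folio to NULL at the top of each allocation attempt so that, when every node is filtered out of the inner loop, the later "did we get a folio?" check does not read a stale pointer from a previous iteration. The sketch below is a minimal userspace analogue of that pattern; all names (alloc_on_node, gather, fake_pool) are invented for illustration and are not kernel APIs.]

```c
#include <assert.h>
#include <stddef.h>

struct folio { int nid; };

static struct folio fake_pool[4];

/* Pretend allocator: succeeds only if `node` is in the allowed set. */
static struct folio *alloc_on_node(int node, const int *allowed, int nallowed)
{
	for (int i = 0; i < nallowed; i++)
		if (allowed[i] == node) {
			fake_pool[node].nid = node;
			return &fake_pool[node];
		}
	return NULL;
}

/* Analogue of the gather loop: returns how many folios were obtained. */
static int gather(int needed, const int *mems, int nmems,
		  const int *allowed, int nallowed)
{
	int got = 0;

	for (int i = 0; i < needed; i++) {
		struct folio *folio = NULL;	/* the fix: reset each iteration */

		for (int m = 0; m < nmems; m++) {
			folio = alloc_on_node(mems[m], allowed, nallowed);
			if (folio)
				break;
		}
		/*
		 * Without the reset above, when the allowed set is empty a
		 * value left over from a prior iteration could be seen here.
		 */
		if (folio)
			got++;
	}
	return got;
}
```

With an empty allowed set, `gather()` correctly reports zero successes instead of acting on a dangling pointer; with node 1 allowed, every attempt succeeds.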