The patch titled
     Subject: mm/hugetlb: add support for mempolicy MPOL_PREFERRED_MANY
has been added to the -mm tree.  Its filename is
     mm-hugetlb-add-support-for-mempolicy-mpol_preferred_many.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-hugetlb-add-support-for-mempolicy-mpol_preferred_many.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-hugetlb-add-support-for-mempolicy-mpol_preferred_many.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Ben Widawsky <ben.widawsky@xxxxxxxxx>
Subject: mm/hugetlb: add support for mempolicy MPOL_PREFERRED_MANY

Implement the missing huge page allocation functionality while obeying the
preferred node semantics.  This is similar to the implementation for
general page allocation, as it uses a fallback mechanism to try multiple
preferred nodes first, and then all other nodes.

[akpm: fix compiling issue when merging with other hugetlb patch]
[Thanks to 0day bot for catching the missing #ifdef CONFIG_NUMA issue]
Link: https://lore.kernel.org/r/20200630212517.308045-12-ben.widawsky@xxxxxxxxx
Link: https://lkml.kernel.org/r/1627970362-61305-4-git-send-email-feng.tang@xxxxxxxxx
Suggested-by: Michal Hocko <mhocko@xxxxxxxx>
Signed-off-by: Ben Widawsky <ben.widawsky@xxxxxxxxx>
Co-developed-by: Feng Tang <feng.tang@xxxxxxxxx>
Signed-off-by: Feng Tang <feng.tang@xxxxxxxxx>
Cc: Andi Kleen <ak@xxxxxxxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Huang Ying <ying.huang@xxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Randy Dunlap <rdunlap@xxxxxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/hugetlb.c |   28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

--- a/mm/hugetlb.c~mm-hugetlb-add-support-for-mempolicy-mpol_preferred_many
+++ a/mm/hugetlb.c
@@ -1166,7 +1166,20 @@ static struct page *dequeue_huge_page_vm

	gfp_mask = htlb_alloc_mask(h);
	nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
+#ifdef CONFIG_NUMA
+	if (mpol->mode == MPOL_PREFERRED_MANY) {
+		page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
+		if (page)
+			goto check_reserve;
+		/* Fallback to all nodes */
+		nodemask = NULL;
+	}
+#endif
	page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
+
+#ifdef CONFIG_NUMA
+check_reserve:
+#endif
	if (page && !avoid_reserve && vma_has_reserves(vma, chg)) {
		SetHPageRestoreReserve(page);
		h->resv_huge_pages--;
@@ -2147,6 +2160,21 @@ struct page *alloc_buddy_huge_page_with_
	nodemask_t *nodemask;

	nid = huge_node(vma, addr, gfp_mask, &mpol, &nodemask);
+#ifdef CONFIG_NUMA
+	if (mpol->mode == MPOL_PREFERRED_MANY) {
+		gfp_t gfp = gfp_mask | __GFP_NOWARN;
+
+		gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
+		page = alloc_surplus_huge_page(h, gfp, nid, nodemask, false);
+		if (page) {
+			mpol_cond_put(mpol);
+			return page;
+		}
+
+		/* Fallback to all nodes */
+		nodemask = NULL;
+	}
+#endif
	page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask, false);
	mpol_cond_put(mpol);

_

Patches currently in -mm which might be from ben.widawsky@xxxxxxxxx are

mm-hugetlb-add-support-for-mempolicy-mpol_preferred_many.patch
mm-mempolicy-advertise-new-mpol_preferred_many.patch
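
For reference, a minimal userspace sketch that exercises the new
allocation path (illustration only, not part of the patch; it assumes a
kernel with this series applied, a system with NUMA nodes 0 and 1, the
default 2MB hugepage size, and hugepages preallocated via
vm.nr_hugepages):

/*
 * Illustration only, not from the patch: set MPOL_PREFERRED_MANY and
 * fault in one hugetlb page, driving dequeue_huge_page_vma() above.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5	/* value from include/uapi/linux/mempolicy.h */
#endif

int main(void)
{
	unsigned long nodemask = (1UL << 0) | (1UL << 1);	/* prefer nodes 0,1 */
	size_t len = 2UL << 20;		/* assumes 2MB default hugepage size */
	void *p;

	/* Raw syscall so no libnuma dependency is needed. */
	if (syscall(SYS_set_mempolicy, MPOL_PREFERRED_MANY, &nodemask,
		    8 * sizeof(nodemask)) < 0) {
		perror("set_mempolicy");	/* kernels without this series reject the mode */
		return 1;
	}

	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");		/* needs preallocated hugepages (vm.nr_hugepages) */
		return 1;
	}

	/*
	 * Touching the mapping faults in a hugetlb page; with the policy
	 * set, the dequeue path tries nodes 0-1 first, then all nodes.
	 */
	memset(p, 0, len);
	printf("hugetlb page mapped at %p\n", p);
	munmap(p, len);
	return 0;
}

If the preferred nodes have no free hugepages, the fallback added in the
hunks above should still satisfy the fault from any other node that has
pages available, rather than failing outright.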