The patch titled
     Subject: mm/hugetlb: get rid of NODEMASK_ALLOC
has been added to the -mm tree.  Its filename is
     mm-hugetlb-get-rid-of-nodemask_alloc.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-hugetlb-get-rid-of-nodemask_alloc.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-hugetlb-get-rid-of-nodemask_alloc.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Oscar Salvador <osalvador@xxxxxxx>
Subject: mm/hugetlb: get rid of NODEMASK_ALLOC

NODEMASK_ALLOC is used to allocate a nodemask bitmap; it first determines
whether the mask should live on the stack or be allocated dynamically,
depending on NODES_SHIFT.  Right now, it takes the dynamic path whenever
the nodemask_t is larger than 32 bytes.

Although we could bump that threshold to a more reasonable value, the
largest a nodemask_t can get is 128 bytes, and __nr_hugepages_store_common()
is reached via a rather shallow call stack, so we can simply keep the mask
on the stack and get rid of the NODEMASK_ALLOC call here.

This reduces some code churn and complexity.

Link: http://lkml.kernel.org/r/20190402133415.21983-1-osalvador@xxxxxxx
Signed-off-by: Oscar Salvador <osalvador@xxxxxxx>
Reviewed-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Alex Ghiti <alex@xxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Jing Xiangfeng <jingxiangfeng@xxxxxxxxxx>
Cc: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/hugetlb.c |   36 +++++++++++-------------------------
 1 file changed, 11 insertions(+), 25 deletions(-)

--- a/mm/hugetlb.c~mm-hugetlb-get-rid-of-nodemask_alloc
+++ a/mm/hugetlb.c
@@ -2447,44 +2447,30 @@ static ssize_t __nr_hugepages_store_comm
 					   unsigned long count, size_t len)
 {
 	int err;
-	NODEMASK_ALLOC(nodemask_t, nodes_allowed, GFP_KERNEL | __GFP_NORETRY);
+	nodemask_t nodes_allowed, *n_mask;
 
-	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported()) {
-		err = -EINVAL;
-		goto out;
-	}
+	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
+		return -EINVAL;
 
 	if (nid == NUMA_NO_NODE) {
 		/*
 		 * global hstate attribute
 		 */
 		if (!(obey_mempolicy &&
-				init_nodemask_of_mempolicy(nodes_allowed))) {
-			NODEMASK_FREE(nodes_allowed);
-			nodes_allowed = &node_states[N_MEMORY];
-		}
-	} else if (nodes_allowed) {
+				init_nodemask_of_mempolicy(&nodes_allowed)))
+			n_mask = &node_states[N_MEMORY];
+		else
+			n_mask = &nodes_allowed;
+	} else {
 		/*
 		 * Node specific request.  count adjustment happens in
 		 * set_max_huge_pages() after acquiring hugetlb_lock.
 		 */
-		init_nodemask_of_node(nodes_allowed, nid);
-	} else {
-		/*
-		 * Node specific request, but we could not allocate the few
-		 * words required for a node mask.  We are unlikely to hit
-		 * this condition.  Since we can not pass down the appropriate
-		 * node mask, just return ENOMEM.
-		 */
-		err = -ENOMEM;
-		goto out;
+		init_nodemask_of_node(&nodes_allowed, nid);
+		n_mask = &nodes_allowed;
 	}
 
-	err = set_max_huge_pages(h, count, nid, nodes_allowed);
-
-out:
-	if (nodes_allowed != &node_states[N_MEMORY])
-		NODEMASK_FREE(nodes_allowed);
+	err = set_max_huge_pages(h, count, nid, n_mask);
 
 	return err ? err : len;
 }
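
For context on what is being removed: NODEMASK_ALLOC() and NODEMASK_FREE()
live in include/linux/nodemask.h.  The sketch below paraphrases their
definitions around this kernel version; it is illustrative background, not
part of the patch, and the exact text in the tree may differ slightly.

/*
 * Paraphrased from include/linux/nodemask.h: with NODES_SHIFT > 8 a
 * nodemask_t is more than 256 bits (32 bytes), so NODEMASK_ALLOC()
 * falls back to kmalloc() and NODEMASK_FREE() is kfree().  Otherwise
 * the mask is declared on the caller's stack and freeing is a no-op.
 */
#if NODES_SHIFT > 8 /* nodemask_t > 32 bytes */
#define NODEMASK_ALLOC(type, name, gfp_flags)	\
			type *name = kmalloc(sizeof(*name), gfp_flags)
#define NODEMASK_FREE(m)			kfree(m)
#else
#define NODEMASK_ALLOC(type, name, gfp_flags)	type _##name, *name = &_##name
#define NODEMASK_FREE(m)			do {} while (0)
#endif

The changelog's worst case corresponds to NODES_SHIFT == 10: a nodemask_t
of 1024 bits, i.e. 128 bytes, so declaring nodes_allowed directly in
__nr_hugepages_store_common() costs at most 128 bytes of stack.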
_

Patches currently in -mm which might be from osalvador@xxxxxxx are

mmmemory_hotplug-unlock-1gb-hugetlb-on-x86_64.patch
mmmemory_hotplug-drop-redundant-hugepage_migration_supported-check.patch
mm-hugetlb-get-rid-of-nodemask_alloc.patch