On Wed, Sep 15, 2021 at 11:50 AM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> On Thu, 9 Sep 2021 22:16:55 +0800 yaozhenguo <yaozhenguo1@xxxxxxxxx> wrote:
>
> > We can specify the number of hugepages to allocate at boot, but at
> > present those hugepages are balanced across all NUMA nodes. In some
> > scenarios we only need hugepages on one node. For example, DPDK needs
> > hugepages on the same node as the NIC. Since boot-time hugepages are
> > spread evenly over the nodes, if DPDK needs four 1G hugepages on
> > node1 and the system has 16 NUMA nodes, we must reserve 4 * 16 = 64
> > hugepages on the kernel cmdline, of which only four are used; the
> > rest have to be freed after boot. If system memory is low (for
> > example, 64G), reserving them in the first place is impossible. So,
> > extend the hugepages parameter to support specifying hugepages on a
> > specific node. For example, add the following parameters:
> >
> > hugepagesz=1G hugepages=0:1,1:3
> >
> > This will allocate 1 hugepage on node0 and 3 hugepages on node1.
> >
> > ...
> >
> > @@ -2842,10 +2843,75 @@ static void __init gather_bootmem_prealloc(void)
> >  	}
> >  }
> >
> > +static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid)
> > +{
> > +	unsigned long i;
> > +	char buf[32];
> > +
> > +	for (i = 0; i < h->max_huge_pages_node[nid]; ++i) {
> > +		if (hstate_is_gigantic(h)) {
> > +			struct huge_bootmem_page *m;
> > +			void *addr;
> > +
> > +			addr = memblock_alloc_try_nid_raw(
> > +					huge_page_size(h), huge_page_size(h),
> > +					0, MEMBLOCK_ALLOC_ACCESSIBLE, nid);
> > +			if (!addr)
> > +				break;
> > +			m = addr;
> > +			BUG_ON(!IS_ALIGNED(virt_to_phys(m), huge_page_size(h)));
>
> We try very hard to avoid adding BUG calls.  Is there any way in which
> this code can emit a WARNing then permit the kernel to keep operating?
>

Maybe we can rewrite it as below:

	if (WARN(!IS_ALIGNED(virt_to_phys(m), huge_page_size(h)),
		 "HugeTLB: page addr:%p is not aligned\n", m))
		break;

@Mike, do you think it's OK?

> > +			/*
> > +			 * Put them into a private list first because mem_map
> > +			 * is not up yet.
> > +			 */
> > +			INIT_LIST_HEAD(&m->list);
> > +			list_add(&m->list, &huge_boot_pages);
> > +			m->hstate = h;
> > +		} else {
> > +			struct page *page;
> > +
> > +			gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
> > +
> > +			page = alloc_fresh_huge_page(h, gfp_mask, nid,
> > +					&node_states[N_MEMORY], NULL);
> > +			if (!page)
> > +				break;
> > +			put_page(page); /* free it into the hugepage allocator */
> > +		}
> > +		cond_resched();
> > +	}
> > +	if (i == h->max_huge_pages_node[nid])
> > +		return;
> > +
> > +	string_get_size(huge_page_size(h), 1, STRING_UNITS_2, buf, 32);
> > +	pr_warn("HugeTLB: allocating %u of page size %s failed node%d. Only allocated %lu hugepages.\n",
> > +		h->max_huge_pages_node[nid], buf, nid, i);
> > +	h->max_huge_pages -= (h->max_huge_pages_node[nid] - i);
> > +	h->max_huge_pages_node[nid] = i;
> > +}
> > +
>
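For reference, below is a minimal sketch of how the new
"hugepages=<node>:<count>[,<node>:<count>...]" string could be parsed at
boot. It is an illustration only, not the patch's actual code: the helper
name parse_hugepages_node() and its signature are assumptions (in practice
this work would presumably live in the existing hugepages_setup() handler),
and the sketch covers only the node:count form, not the plain "hugepages=N"
form, which would still need separate handling.

/*
 * Hypothetical sketch: parse "<node>:<count>" pairs separated by
 * commas, e.g. "0:1,1:3", filling a per-node count array and a
 * running total.
 */
static int __init parse_hugepages_node(const char *s,
				       unsigned long *count_node,
				       unsigned long *total)
{
	while (*s) {
		unsigned long node, count;
		char *end;

		node = simple_strtoul(s, &end, 10);
		if (*end != ':' || node >= MAX_NUMNODES)
			return -EINVAL;	/* malformed pair or bad node id */

		count = simple_strtoul(end + 1, &end, 10);
		if (*end == ',')
			end++;		/* step over separator to next pair */
		else if (*end)
			return -EINVAL;	/* trailing garbage */

		count_node[node] = count; /* e.g. "1:3" -> 3 pages on node 1 */
		*total += count;
		s = end;
	}
	return 0;
}

With hugepagesz=1G hugepages=0:1,1:3, this would record count_node[0] = 1
and count_node[1] = 3 (total 4), which a per-node allocator like
hugetlb_hstate_alloc_pages_onenode() above could then consume node by node.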