[to-be-updated] mm-hugetlb_vmemmap-use-bulk-allocator-in-alloc_vmemmap_page_list.patch removed from -mm tree

The quilt patch titled
     Subject: mm: hugetlb_vmemmap: use bulk allocator in alloc_vmemmap_page_list()
has been removed from the -mm tree.  Its filename was
     mm-hugetlb_vmemmap-use-bulk-allocator-in-alloc_vmemmap_page_list.patch

This patch was dropped because an updated version will be merged

------------------------------------------------------
From: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Subject: mm: hugetlb_vmemmap: use bulk allocator in alloc_vmemmap_page_list()
Date: Tue, 5 Sep 2023 18:35:08 +0800

alloc_vmemmap_page_list() needs to allocate 4095 pages (for a 1G HugeTLB page)
or 7 pages (for a 2M HugeTLB page) in a single call, so add a bulk allocator
interface, alloc_pages_bulk_list_node(), and switch alloc_vmemmap_page_list()
over to it to speed up page allocation.

A simple test on an arm64 qemu guest with a 1G HugeTLB page shows 870,842ns
with the bulk allocator vs 3,845,252ns without it; even allowing for some
fluctuation between runs, this is still a nice improvement.

Link: https://lkml.kernel.org/r/20230905103508.2996474-1-wangkefeng.wang@xxxxxxxxxx
Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Tested-by: Yuan Can <yuancan@xxxxxxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/gfp.h  |    9 +++++++++
 mm/hugetlb_vmemmap.c |    6 ++++++
 2 files changed, 15 insertions(+)

--- a/include/linux/gfp.h~mm-hugetlb_vmemmap-use-bulk-allocator-in-alloc_vmemmap_page_list
+++ a/include/linux/gfp.h
@@ -196,6 +196,15 @@ alloc_pages_bulk_list(gfp_t gfp, unsigne
 }
 
 static inline unsigned long
+alloc_pages_bulk_list_node(gfp_t gfp, int nid, unsigned long nr_pages, struct list_head *list)
+{
+	if (nid == NUMA_NO_NODE)
+		nid = numa_mem_id();
+
+	return __alloc_pages_bulk(gfp, nid, NULL, nr_pages, list, NULL);
+}
+
+static inline unsigned long
 alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_array)
 {
 	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array);
--- a/mm/hugetlb_vmemmap.c~mm-hugetlb_vmemmap-use-bulk-allocator-in-alloc_vmemmap_page_list
+++ a/mm/hugetlb_vmemmap.c
@@ -385,7 +385,13 @@ static int alloc_vmemmap_page_list(unsig
 	unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
 	int nid = page_to_nid((struct page *)start);
 	struct page *page, *next;
+	unsigned long nr_allocated;
 
+	nr_allocated = alloc_pages_bulk_list_node(gfp_mask, nid, nr_pages, list);
+	if (!nr_allocated)
+		return -ENOMEM;
+
+	nr_pages -= nr_allocated;
 	while (nr_pages--) {
 		page = alloc_pages_node(nid, gfp_mask, 0);
 		if (!page)
_
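
Below is a minimal usage sketch of the new helper, not part of the patch
itself; the caller name fill_page_list() is hypothetical.  The bulk allocator
is opportunistic and may return fewer pages than requested, which is why the
caller keeps a single-page fallback loop, just as alloc_vmemmap_page_list()
does in the hunk above.  Error unwinding (freeing the pages already on the
list) is left to the caller here for brevity.

/*
 * Hypothetical caller (illustration only): allocate nr_pages order-0 pages
 * on node nid and put them on @list, preferring the bulk allocator.
 */
static int fill_page_list(int nid, gfp_t gfp_mask, unsigned long nr_pages,
			  struct list_head *list)
{
	unsigned long nr_allocated;
	struct page *page;

	/* Grab as many pages as possible in one pass. */
	nr_allocated = alloc_pages_bulk_list_node(gfp_mask, nid, nr_pages, list);
	if (!nr_allocated)
		return -ENOMEM;

	/* Fall back to single-page allocations for whatever is left over. */
	for (nr_pages -= nr_allocated; nr_pages; nr_pages--) {
		page = alloc_pages_node(nid, gfp_mask, 0);
		if (!page)
			return -ENOMEM;
		list_add_tail(&page->lru, list);
	}

	return 0;
}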

Patches currently in -mm which might be from wangkefeng.wang@xxxxxxxxxx are




