+ mm-hugetlb_vmemmap-use-bulk-allocator-in-alloc_vmemmap_page_list.patch added to mm-unstable branch

The patch titled
     Subject: mm: hugetlb_vmemmap: use bulk allocator in alloc_vmemmap_page_list()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-hugetlb_vmemmap-use-bulk-allocator-in-alloc_vmemmap_page_list.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-hugetlb_vmemmap-use-bulk-allocator-in-alloc_vmemmap_page_list.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Subject: mm: hugetlb_vmemmap: use bulk allocator in alloc_vmemmap_page_list()
Date: Tue, 5 Sep 2023 18:35:08 +0800

alloc_vmemmap_page_list() needs to allocate 4095 pages (1G HugeTLB) or 7
pages (2M HugeTLB) at once, so add a bulk allocator interface,
alloc_pages_bulk_list_node(), and switch alloc_vmemmap_page_list() over to
it to speed up page allocation.

A simple test with a 1G HugeTLB page on arm64 qemu shows 870,842ns vs
3,845,252ns; even allowing for some run-to-run fluctuation, this is still
a nice improvement.

Link: https://lkml.kernel.org/r/20230905103508.2996474-1-wangkefeng.wang@xxxxxxxxxx
Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Tested-by: Yuan Can <yuancan@xxxxxxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/gfp.h  |    9 +++++++++
 mm/hugetlb_vmemmap.c |    6 ++++++
 2 files changed, 15 insertions(+)

--- a/include/linux/gfp.h~mm-hugetlb_vmemmap-use-bulk-allocator-in-alloc_vmemmap_page_list
+++ a/include/linux/gfp.h
@@ -196,6 +196,15 @@ alloc_pages_bulk_list(gfp_t gfp, unsigne
 }
 
 static inline unsigned long
+alloc_pages_bulk_list_node(gfp_t gfp, int nid, unsigned long nr_pages, struct list_head *list)
+{
+	if (nid == NUMA_NO_NODE)
+		nid = numa_mem_id();
+
+	return __alloc_pages_bulk(gfp, nid, NULL, nr_pages, list, NULL);
+}
+
+static inline unsigned long
 alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_array)
 {
 	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array);
--- a/mm/hugetlb_vmemmap.c~mm-hugetlb_vmemmap-use-bulk-allocator-in-alloc_vmemmap_page_list
+++ a/mm/hugetlb_vmemmap.c
@@ -385,7 +385,13 @@ static int alloc_vmemmap_page_list(unsig
 	unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
 	int nid = page_to_nid((struct page *)start);
 	struct page *page, *next;
+	unsigned long nr_allocated;
 
+	nr_allocated = alloc_pages_bulk_list_node(gfp_mask, nid, nr_pages, list);
+	if (!nr_allocated)
+		return -ENOMEM;
+
+	nr_pages -= nr_allocated;
 	while (nr_pages--) {
 		page = alloc_pages_node(nid, gfp_mask, 0);
 		if (!page)
_
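
For context, a back-of-the-envelope check of the page counts in the
changelog: assuming 4K base pages and a 64-byte struct page, a 1G HugeTLB
page carries 16MB of vmemmap (4096 pages) and a 2M HugeTLB page carries
32KB (8 pages); with the HugeTLB vmemmap optimization only one of those
pages is kept, so 4095 or 7 pages must be re-allocated when the vmemmap is
restored.

The sketch below shows how a caller such as alloc_vmemmap_page_list() is
expected to use the new list-based helper: bulk-allocate as much as
possible in one call, then top up any shortfall one page at a time.  This
is a minimal illustration, not the verbatim patched function; the gfp
flags, list handling and error path follow the existing hugetlb_vmemmap
code but are reproduced here only as an assumption.

/*
 * Illustrative sketch only -- not the verbatim patched function.  It
 * assumes the alloc_pages_bulk_list_node() helper added by the patch
 * above and the usual hugetlb_vmemmap gfp flags.
 */
#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm.h>

static int alloc_vmemmap_page_list_sketch(unsigned long start,
					  unsigned long end,
					  struct list_head *list)
{
	gfp_t gfp_mask = GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_THISNODE;
	unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
	int nid = page_to_nid((struct page *)start);
	unsigned long nr_allocated;
	struct page *page, *next;

	/* Grab as many of the nr_pages as possible in a single bulk call. */
	nr_allocated = alloc_pages_bulk_list_node(gfp_mask, nid, nr_pages, list);
	if (!nr_allocated)
		return -ENOMEM;

	/* Top up whatever the bulk allocator could not provide. */
	nr_pages -= nr_allocated;
	while (nr_pages--) {
		page = alloc_pages_node(nid, gfp_mask, 0);
		if (!page)
			goto out;
		list_add_tail(&page->lru, list);
	}

	return 0;
out:
	/* On failure, give back every page collected so far. */
	list_for_each_entry_safe(page, next, list, lru)
		__free_page(page);
	return -ENOMEM;
}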

Patches currently in -mm which might be from wangkefeng.wang@xxxxxxxxxx are

mm-hugetlb_vmemmap-use-bulk-allocator-in-alloc_vmemmap_page_list.patch