Re: [PATCH resend] mm: hugetlb_vmemmap: use bulk allocator in alloc_vmemmap_page_list()

On 2023/9/6 11:25, Muchun Song wrote:


On Sep 6, 2023, at 11:13, Kefeng Wang <wangkefeng.wang@xxxxxxxxxx> wrote:



On 2023/9/6 10:47, Matthew Wilcox wrote:
On Tue, Sep 05, 2023 at 06:35:08PM +0800, Kefeng Wang wrote:
4095 pages (1G) or 7 pages (2M) need to be allocated at once in
alloc_vmemmap_page_list(), so let's add a bulk allocator variant
alloc_pages_bulk_list_node() and switch alloc_vmemmap_page_list()
to use it to accelerate page allocation.
Argh, no, please don't do this.
Iterating a linked list is _expensive_.  It is about 10x quicker to
iterate an array than a linked list.  Adding the list_head option
to __alloc_pages_bulk() was a colossal mistake.  Don't perpetuate it.
These pages are going into an array anyway.  Don't put them on a list
first.

struct vmemmap_remap_walk - walk vmemmap page table

* @vmemmap_pages:  the list head of the vmemmap pages that can be freed
*                  or is mapped from.

At present, struct vmemmap_remap_walk uses a list for the vmemmap page table walk, so do you mean we need to change vmemmap_pages from a list to an array first and then use the array bulk API, and even kill the list bulk API?

It'll be a little complex for hugetlb_vmemmap. Would it be reasonable to
use __alloc_pages_bulk directly in hugetlb_vmemmap itself?


We could use alloc_pages_bulk_array_node() here without introducing a new
alloc_pages_bulk_list_node(), and only focus on accelerating page allocation
for now.
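
Roughly something like the sketch below (just an illustration, not the final
patch: the bulk allocation is staged in a temporary array and then linked onto
the list that vmemmap_remap_walk expects, and the partial-allocation fallback
is simplified to a plain failure):

static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
				   struct list_head *list)
{
	gfp_t gfp_mask = GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_THISNODE;
	unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
	int nid = page_to_nid((struct page *)start);
	struct page **page_array;
	unsigned long nr_allocated, i;

	/* Temporary staging array for the bulk allocator. */
	page_array = kvcalloc(nr_pages, sizeof(struct page *), GFP_KERNEL);
	if (!page_array)
		return -ENOMEM;

	nr_allocated = alloc_pages_bulk_array_node(gfp_mask, nid, nr_pages,
						   page_array);
	/*
	 * Sketch only: a real version would likely fall back to single-page
	 * allocation for the remainder instead of failing outright.
	 */
	if (nr_allocated != nr_pages)
		goto out_free;

	/* vmemmap_remap_walk still consumes a list, so link the pages up. */
	for (i = 0; i < nr_pages; i++)
		list_add_tail(&page_array[i]->lru, list);

	kvfree(page_array);
	return 0;

out_free:
	for (i = 0; i < nr_allocated; i++)
		__free_pages(page_array[i], 0);
	kvfree(page_array);
	return -ENOMEM;
}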




