On 2023/9/6 11:14, Matthew Wilcox wrote:
On Wed, Sep 06, 2023 at 11:13:27AM +0800, Kefeng Wang wrote:
On 2023/9/6 10:47, Matthew Wilcox wrote:
On Tue, Sep 05, 2023 at 06:35:08PM +0800, Kefeng Wang wrote:
alloc_vmemmap_page_list() needs to allocate 4095 pages (1G) or 7 pages (2M)
in one go, so add a bulk allocator variant, alloc_pages_bulk_list_node(),
and switch alloc_vmemmap_page_list() over to it to speed up page allocation.
Argh, no, please don't do this.
Iterating a linked list is _expensive_. It is about 10x quicker to
iterate an array than a linked list. Adding the list_head option
to __alloc_pages_bulk() was a colossal mistake. Don't perpetuate it.
These pages are going into an array anyway. Don't put them on a list
first.
 * struct vmemmap_remap_walk - walk vmemmap page table
 * @vmemmap_pages:	the list head of the vmemmap pages that can be freed
 *			or is mapped from.
At present, struct vmemmap_remap_walk uses a list for the vmemmap page table
walk. Do you mean we should first change vmemmap_pages from a list to an
array and then use the array bulk API, and even kill the list bulk API?
That would be better, yes.
Converting vmemmap_remap_walk to use an array may not be quick, so I will
use the page array bulk alloc API first and won't introduce a new
alloc_pages_bulk_list_node(), thanks.