Re: [PATCH v3] mm/hugetlb_vmemmap: remap head page to newly allocated page

> On Nov 10, 2022, at 04:06, Joao Martins <joao.m.martins@xxxxxxxxxx> wrote:
> 
> Today, with `hugetlb_free_vmemmap=on`, the struct page memory that is freed
> back to the page allocator works as follows: for a 2M hugetlb page it will
> reuse the first 4K vmemmap page to remap the remaining 7 vmemmap pages, and
> for a 1G hugetlb it will remap the remaining 4095 vmemmap pages. Essentially,
> that means it breaks the first 4K of a potentially contiguous chunk of
> memory of 32K (for 2M hugetlb pages) or 16M (for 1G hugetlb pages). For
> this reason the memory that is freed back to the page allocator cannot be
> used by hugetlb to allocate huge pages of the same size, but only huge
> pages of a smaller size:
> 
> Trying to assign a 64G node to hugetlb (on a 128G 2-node guest, each node
> having 64G):
> 
> * Before allocation:
> Free pages count per migrate type at order       0      1      2      3      4      5      6      7      8      9     10
> ...
> Node    0, zone   Normal, type      Movable    340    100     32     15      1      2      0      0      0      1  15558
> 
> $ echo 32768 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> $ cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> 31987
> 
> * After:
> 
> Node    0, zone   Normal, type      Movable  30893  32006  31515      7      0      0      0      0      0      0      0
> 
> Notice how the memory freed back is put into the 4K / 8K / 16K page pools
> (order 0..2). And hugetlb allocates a total of only 31987 pages (63974M)
> out of the 32768 requested.
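> 
> (For reference on the sizes above, assuming the common 64-byte struct
> page: a 2M hugetlb page spans 512 base pages, so its struct pages occupy
> 512 * 64 = 32K of vmemmap, i.e. 8 vmemmap pages; a 1G hugetlb page spans
> 262144 base pages, i.e. 16M of vmemmap, or 4096 pages.)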
> 
> To fix this behaviour, rather than remapping the second vmemmap page (thus
> breaking the contiguous block of memory backing the struct pages),
> repopulate the first vmemmap page with a new one. We allocate and copy
> from the currently mapped vmemmap page, and then remap it later on.
> The same algorithm works if there's a pre-initialized walk::reuse_page,
> and the head page doesn't need to be skipped; instead we remap it
> when the @addr being changed is the @reuse_addr.
> 
> The new head page is allocated in vmemmap_remap_free(), given that on
> restore there's no need for a functional change. Note that, because only
> one hugepage is remapped at a time, only one free 4K page at a time is
> needed to remap the head page. Should it fail to allocate said new page,
> it reuses the one that's already mapped, just like before. As a result,
> for every 64G of contiguous hugepages it can give back 1G more of
> contiguous memory, while needing in total 128M of new 4K pages (for 2M
> hugetlb) or 256K (for 1G hugetlb).
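> 
> A sketch of the vmemmap_remap_free() side (not the exact hunk; @nid and
> @gfp_mask stand for the node and allocation flags used there, other names
> as in mm/hugetlb_vmemmap.c):
> 
> 	/*
> 	 * Try a fresh page for the head so that the old head's backing
> 	 * memory stays part of the contiguous block being freed; fall
> 	 * back to reusing the currently mapped head page (the old
> 	 * behaviour) should the allocation fail.
> 	 */
> 	walk.reuse_page = alloc_pages_node(nid, gfp_mask, 0);
> 	if (walk.reuse_page) {
> 		/* Preserve the head page contents before it is remapped. */
> 		copy_page(page_to_virt(walk.reuse_page),
> 			  (void *)walk.reuse_addr);
> 		/* Queue it on vmemmap_pages so the error path frees it. */
> 		list_add(&walk.reuse_page->lru, &vmemmap_pages);
> 	}
> 
> (The 128M / 256K totals follow from one 4K head page per hugepage:
> 64G / 2M = 32768 hugepages * 4K = 128M, and 64G / 1G = 64 * 4K = 256K.)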
> 
> After the changes, trying to assign a 64G node to hugetlb (on a 128G 2-node
> guest, each node with 64G):
> 
> * Before allocation
> Free pages count per migrate type at order       0      1      2      3      4      5      6      7      8      9     10
> ...
> Node    0, zone   Normal, type      Movable      1      1      1      0      0      1      0      0      1      1  15564
> 
> $ echo 32768  > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> $ cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> 32394
> 
> * After:
> 
> Node    0, zone   Normal, type      Movable      0     50     97    108     96     81     70     46     18      0      0
> 
> In the example above, 407 more hugetlb 2M pages are allocated, i.e. 814M
> out of the 32394 (64788M) allocated. So the memory freed back is indeed
> being reused by hugetlb, and there's no massive accumulation of unused
> order-0..order-2 pages.
> 
> Signed-off-by: Joao Martins <joao.m.martins@xxxxxxxxxx>

Thanks.

Reviewed-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>

A nit below.

> ---
> Changes since v2:
> Comments from Muchun:
> * Delete the comment above the tlb flush
> * Move the head vmemmap page copy into vmemmap_remap_free()
> * Add and delete the new head page to/from vmemmap_pages (to be freed
>   in case of error)
> * Move the remap of the head like the tail pages in vmemmap_remap_pte(),
>   but special-casing only when addr == reuse_addr
> * Remove the PAGE_SIZE alignment check, as the code assumes
>   that start/end are page-aligned (and VM_BUG_ON otherwise)
> * Adjust the commit message taking the above changes into account.
> ---
> mm/hugetlb_vmemmap.c | 34 +++++++++++++++++++++++++++-------
> 1 file changed, 27 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index 7898c2c75e35..f562b3f46410 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -203,12 +203,7 @@ static int vmemmap_remap_range(unsigned long start, unsigned long end,
>  		return ret;
>  	} while (pgd++, addr = next, addr != end);
>  
> -	/*
> -	 * We only change the mapping of the vmemmap virtual address range
> -	 * [@start + PAGE_SIZE, end), so we only need to flush the TLB which
> -	 * belongs to the range.
> -	 */
> -	flush_tlb_kernel_range(start + PAGE_SIZE, end);
> +	flush_tlb_kernel_range(start, end);
>  
>  	return 0;
>  }
> @@ -244,9 +239,16 @@ static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
>  	 * to the tail pages.
>  	 */
>  	pgprot_t pgprot = PAGE_KERNEL_RO;
> -	pte_t entry = mk_pte(walk->reuse_page, pgprot);
>  	struct page *page = pte_page(*pte);
> +	pte_t entry;
>  
> +	/* Remapping the head page requires r/w */
> +	if (unlikely(addr == walk->reuse_addr)) {
> +		pgprot = PAGE_KERNEL;
> +		list_del(&walk->reuse_page->lru);

Maybe an smp_wmb() should be inserted here to make sure the copied data is
visible before the set_pte_at(), like commit 939de63d35dde45 does.
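
That is, something like this (a sketch):

	if (unlikely(addr == walk->reuse_addr)) {
		pgprot = PAGE_KERNEL;
		list_del(&walk->reuse_page->lru);

		/*
		 * Make sure that the preceding stores that copied the head
		 * page contents are visible before set_pte_at() publishes
		 * the new mapping.
		 */
		smp_wmb();
	}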
