Re: [PATCH v3 09/14] mm/page_alloc: reuse tail struct pages for compound pagemaps

On Wed, Jul 14, 2021 at 12:36 PM Joao Martins <joao.m.martins@xxxxxxxxxx> wrote:
>
> Currently memmap_init_zone_device() ends up initializing 32768 pages
> when it only needs to initialize 128 given tail page reuse. That
> number is worse with 1GB compound page geometries, 262144 instead of
> 128. Update memmap_init_zone_device() to skip redundant
> initialization, detailed below.
>
> When a pgmap @geometry is set, all pages are mapped at a given huge page
> alignment and use compound pages to describe them, as opposed to one
> struct page per 4K page.
>
> With @geometry > PAGE_SIZE and when struct pages are stored in RAM
> (!altmap), most tail pages are reused. Consequently, the number of
> unique struct pages is a lot smaller than the total number of struct
> pages being mapped.
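
FWIW, with typical x86-64 values (4K PAGE_SIZE, 64-byte struct page --
both assumptions on my part, not spelled out above), a 1GB compound
page spans:

	1G / 4K = 262144 struct pages

...of which only those backed by the vmemmap pages left unique per
compound page need initializing, i.e. 2 * (4096 / 64) = 128.
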
>
> The altmap path is left alone since it does not support memory savings
> based on compound pagemap geometries.
>
> Signed-off-by: Joao Martins <joao.m.martins@xxxxxxxxxx>
> ---
>  mm/page_alloc.c | 14 +++++++++++++-
>  1 file changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 188cb5f8c308..96975edac0a8 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -6600,11 +6600,23 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
>  static void __ref memmap_init_compound(struct page *page, unsigned long pfn,
>                                         unsigned long zone_idx, int nid,
>                                         struct dev_pagemap *pgmap,
> +                                       struct vmem_altmap *altmap,
>                                         unsigned long nr_pages)
>  {
>         unsigned int order_align = order_base_2(nr_pages);
>         unsigned long i;
>
> +       /*
> +        * With compound page geometry and when struct pages are stored in RAM
> +        * (!altmap), most tail pages are reused. Consequently, the number of
> +        * unique struct pages to initialize is a lot smaller than the total
> +        * number of struct pages being mapped.
> +        * See vmemmap_populate_compound_pages().
> +        */
> +       if (!altmap)
> +               nr_pages = min_t(unsigned long, nr_pages,

What's the scenario where nr_pages is < 128? Shouldn't alignment
already be guaranteed?

> +                                2 * (PAGE_SIZE/sizeof(struct page)));
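
With the same assumed values the cap works out to:

	2 * (PAGE_SIZE / sizeof(struct page)) = 2 * (4096 / 64) = 128

so a 2M geometry (512 pfns per compound page) and a 1G geometry (262144
pfns) both get clamped to 128, and the min_t() would only make a
difference for a geometry smaller than 128 pfns, i.e. below 512K.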


> +
>         __SetPageHead(page);
>
>         for (i = 1; i < nr_pages; i++) {
> @@ -6657,7 +6669,7 @@ void __ref memmap_init_zone_device(struct zone *zone,
>                         continue;
>
>                 memmap_init_compound(page, pfn, zone_idx, nid, pgmap,
> -                                    pfns_per_compound);
> +                                    altmap, pfns_per_compound);

This feels odd; memmap_init_compound() doesn't really care about the
altmap. What do you think about explicitly calculating the parameters
that memmap_init_compound() needs and passing them in?

Not a strong requirement to change, but take another look and let me know.
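
For illustration only, an untested sketch of what I mean at the call
site (the nr_unique name is mine):

	unsigned long nr_unique = pfns_per_compound;

	/*
	 * Tail vmemmap pages are reused when struct pages are stored in
	 * RAM, so only the struct pages backed by the non-reused vmemmap
	 * pages need initializing.
	 */
	if (!altmap)
		nr_unique = min_t(unsigned long, pfns_per_compound,
				  2 * (PAGE_SIZE / sizeof(struct page)));

	memmap_init_compound(page, pfn, zone_idx, nid, pgmap, nr_unique);

That way memmap_init_compound() stays oblivious to the altmap.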

>         }
>
>         pr_info("%s initialised %lu pages in %ums\n", __func__,
> --
> 2.17.1
>


