Re: [PATCH v9 4/5] mm/sparse-vmemmap: improve memory savings for compound devmaps

On Wed, 20 Apr 2022 16:53:09 +0100 Joao Martins <joao.m.martins@xxxxxxxxxx> wrote:

> A compound devmap is a dev_pagemap with @vmemmap_shift > 0 and it
> means that pages are mapped at a given huge page alignment and use
> compound pages as opposed to order-0 pages.
> 
> Take advantage of the fact that most tail pages look the same (except
> the first two) to minimize struct page overhead. Allocate a separate
> page for the vmemmap area which contains the head page and separate for
> the next 64 pages. The rest of the subsections then reuse this tail
> vmemmap page to initialize the rest of the tail pages.
> 
> Sections are arch-dependent (e.g. on x86 it's 64M, 128M or 512M) and
> when initializing compound devmap with big enough @vmemmap_shift (e.g.
> 1G PUD) it may cross multiple sections. The vmemmap code needs to
> consult @pgmap so that multiple sections that all map the same tail
> data can refer back to the first copy of that data for a given
> gigantic page.
> 
> On compound devmaps with 2M align, this mechanism saves 6 out of the
> 8 vmemmap pages needed to map the subsection's 512 struct pages. On a
> 1G compound devmap it saves 4094 pages.
> 
> Altmap isn't supported yet, given various restrictions in the altmap
> pfn allocator, so fall back to the already-in-use vmemmap_populate().  It
> is worth noting that altmap for devmap mappings was there to relieve the
> pressure of inordinate amounts of memmap space to map terabytes of pmem.
> With compound pages the motivation for altmaps for pmem gets reduced.
> 
> ...
>
> @@ -665,12 +770,19 @@ struct page * __meminit __populate_section_memmap(unsigned long pfn,
>  {
>  	unsigned long start = (unsigned long) pfn_to_page(pfn);
>  	unsigned long end = start + nr_pages * sizeof(struct page);
> +	int r;
>  
>  	if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
>  		!IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
>  		return NULL;
>  
> -	if (vmemmap_populate(start, end, nid, altmap))
> +	if (is_power_of_2(sizeof(struct page)) &&

Note that Muchun is working on a compile-time
STRUCT_PAGE_SIZE_IS_POWER_OF_2 which this site should be able to
utilize.

https://lkml.kernel.org/r/20220413144748.84106-2-songmuchun@xxxxxxxxxxxxx

> +	    pgmap && pgmap_vmemmap_nr(pgmap) > 1 && !altmap)
> +		r = vmemmap_populate_compound_pages(pfn, start, end, nid, pgmap);
> +	else
> +		r = vmemmap_populate(start, end, nid, altmap);
> +
> +	if (r < 0)
>  		return NULL;
>  
>  	return pfn_to_page(pfn);




