Re: [PATCH v3 08/14] mm/sparse-vmemmap: populate compound pagemaps

On 7/28/21 7:03 PM, Dan Williams wrote:
> On Wed, Jul 28, 2021 at 8:36 AM Joao Martins <joao.m.martins@xxxxxxxxxx> wrote:
> [..]
>> +/*
>> + * For compound pages bigger than section size (e.g. x86 1G compound
>> + * pages with 2M subsection size) fill the rest of sections as tail
>> + * pages.
>> + *
>> + * Note that memremap_pages() resets @nr_range value and will increment
>> + * it after each successful range onlining. Thus the value of @nr_range
>> + * at section memmap populate corresponds to the in-progress range
>> + * being onlined here.
>> + */
>> +static bool compound_section_index(unsigned long start_pfn,
> 
> Oh, I was thinking this would return the actual Nth index number for
> the section within the compound page. 
> A bool is ok too, but then the
> function name would be something like:
> 
> reuse_compound_section()
> 
> ...right?
> 
Yes.

> 
> [..]
>> [...] And here's compound_section_tail_huge_page() (for the last patch in the series):
>>
>>
>> @@ -690,6 +727,33 @@ static struct page * __meminit compound_section_tail_page(unsigned long addr)
>>         return pte_page(*ptep);
>>  }
>>
>> +static struct page * __meminit compound_section_tail_huge_page(unsigned long addr,
>> +                               unsigned long offset, struct dev_pagemap *pgmap)
>> +{
>> +       unsigned long geometry_size = pgmap_geometry(pgmap) << PAGE_SHIFT;
>> +       pmd_t *pmdp;
>> +
>> +       addr -= PAGE_SIZE;
>> +
>> +       /*
>> +        * Assuming sections are populated sequentially, the previous section's
>> +        * page data can be reused.
>> +        */
>> +       pmdp = pmd_off_k(addr);
>> +       if (!pmdp)
>> +               return ERR_PTR(-ENOMEM);
>> +
>> +       /*
>> +        * Reuse the tail pages' vmemmap PMD page.
>> +        * See layout diagram in Documentation/vm/vmemmap_dedup.rst
>> +        */
>> +       if (offset % geometry_size > PFN_PHYS(PAGES_PER_SECTION))
>> +               return pmd_page(*pmdp);
>> +
Maybe I can just do this here:

if (PHYS_PFN(offset) % pgmap_geometry(pgmap) > PAGES_PER_SECTION)

and thus drop the geometry_size variable entirely.

>> +       /* No reusable PMD, fall back to PTE tail page */
>> +       return NULL;
>> +}
>> +
>>  static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
>>                                                      unsigned long start,
>>                                                      unsigned long end, int node,
>> @@ -697,14 +761,22 @@ static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
>>  {
>>         unsigned long offset, size, addr;
>>
>> -       if (compound_section_index(start_pfn, pgmap)) {
>> -               struct page *page;
>> +       if (compound_section_index(start_pfn, pgmap, &offset)) {
>> +               struct page *page, *hpage;
>> +
>> +               hpage = compound_section_tail_huge_page(addr, offset, pgmap);
>> +               if (IS_ERR(hpage))
>> +                       return -ENOMEM;
>> +               else if (hpage)
> 
> No need for "else" after return... other than that these helpers and
> this arrangement looks good to me.
> 
OK.


