Re: + mm-page_alloc-fix-memmap_init_zone-pageblock-alignment.patch added to -mm tree


On Fri, Mar 2, 2018 at 9:59 PM,  <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> The patch titled
>      Subject: mm/page_alloc: fix memmap_init_zone pageblock alignment
> has been added to the -mm tree.  Its filename is
>      mm-page_alloc-fix-memmap_init_zone-pageblock-alignment.patch
>
> This patch should soon appear at
>     http://ozlabs.org/~akpm/mmots/broken-out/mm-page_alloc-fix-memmap_init_zone-pageblock-alignment.patch
> and later at
>     http://ozlabs.org/~akpm/mmotm/broken-out/mm-page_alloc-fix-memmap_init_zone-pageblock-alignment.patch
>
> Before you just go and hit "reply", please:
>    a) Consider who else should be cc'ed
>    b) Prefer to cc a suitable mailing list as well
>    c) Ideally: find the original patch on the mailing list and do a
>       reply-to-all to that, adding suitable additional cc's
>
> *** Remember to use Documentation/SubmitChecklist when testing your code ***

Documentation/process/submit-checklist.rst nowadays, btw.

> The -mm tree is included into linux-next and is updated
> there every 3-4 working days

Actually, this should go directly into v4.16-rc4. Shall I cc Linus on the
v3 I'm about to send?
Or do you think it's fine to go via -next and -stable and we keep it as is?

--nX

>
> ------------------------------------------------------
> From: Daniel Vacek <neelx@xxxxxxxxxx>
> Subject: mm/page_alloc: fix memmap_init_zone pageblock alignment
>
> BUG at mm/page_alloc.c:1913
>
>>       VM_BUG_ON(page_zone(start_page) != page_zone(end_page));
>
> Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
> where possible") introduced a bug where move_freepages() triggers a
> VM_BUG_ON() on an uninitialized page structure, because the pfn that
> memmap_init_zone() skips to is not pageblock aligned.  To fix this,
> simply align the skipped pfns in memmap_init_zone() the same way as
> move_freepages_block() does.
>
> Link: http://lkml.kernel.org/r/1519988497-28941-1-git-send-email-neelx@xxxxxxxxxx
> Fixes: b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns where possible")
> Signed-off-by: Daniel Vacek <neelx@xxxxxxxxxx>
> Cc: Michal Hocko <mhocko@xxxxxxxx>
> Cc: Vlastimil Babka <vbabka@xxxxxxx>
> Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
> Cc: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
> Cc: Paul Burton <paul.burton@xxxxxxxxxx>
> Cc: <stable@xxxxxxxxxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
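
For anyone following along, the alignment added here is the usual
round-down to a pageblock boundary, i.e. the same rounding that
move_freepages_block() applies to its start pfn. A minimal userspace
sketch of the arithmetic (the pfn value and pageblock_order of 9 below
are purely illustrative, not taken from the report):

    #include <stdio.h>

    /* Illustrative: with pageblock_order 9, a pageblock covers
     * 512 pages (2MB with 4K pages). */
    #define PAGEBLOCK_ORDER     9
    #define PAGEBLOCK_NR_PAGES  (1UL << PAGEBLOCK_ORDER)

    int main(void)
    {
            unsigned long next_valid_pfn = 0x2148aUL;   /* made-up value */

            /* Same expression the patch uses in memmap_init_zone():
             * round the skip target down to the start of its pageblock,
             * then subtract 1 so the loop's pfn++ lands exactly on the
             * boundary. */
            unsigned long skip_to =
                    (next_valid_pfn & ~(PAGEBLOCK_NR_PAGES - 1)) - 1;

            printf("next valid pfn:       0x%lx\n", next_valid_pfn);
            printf("pfn after skip:       0x%lx\n", skip_to);      /* 0x213ff */
            printf("pfn after loop pfn++: 0x%lx\n", skip_to + 1);  /* 0x21400 */
            return 0;
    }
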
> ---
>
>  mm/memblock.c   |   13 ++++++-------
>  mm/page_alloc.c |    9 +++++++--
>  2 files changed, 13 insertions(+), 9 deletions(-)
>
> diff -puN mm/memblock.c~mm-page_alloc-fix-memmap_init_zone-pageblock-alignment mm/memblock.c
> --- a/mm/memblock.c~mm-page_alloc-fix-memmap_init_zone-pageblock-alignment
> +++ a/mm/memblock.c
> @@ -1101,13 +1101,12 @@ void __init_memblock __next_mem_pfn_rang
>                 *out_nid = r->nid;
>  }
>
> -unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn,
> -                                                     unsigned long max_pfn)
> +unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn)
>  {
>         struct memblock_type *type = &memblock.memory;
>         unsigned int right = type->cnt;
>         unsigned int mid, left = 0;
> -       phys_addr_t addr = PFN_PHYS(pfn + 1);
> +       phys_addr_t addr = PFN_PHYS(++pfn);
>
>         do {
>                 mid = (right + left) / 2;
> @@ -1118,15 +1117,15 @@ unsigned long __init_memblock memblock_n
>                                   type->regions[mid].size))
>                         left = mid + 1;
>                 else {
> -                       /* addr is within the region, so pfn + 1 is valid */
> -                       return min(pfn + 1, max_pfn);
> +                       /* addr is within the region, so pfn is valid */
> +                       return pfn;
>                 }
>         } while (left < right);
>
>         if (right == type->cnt)
> -               return max_pfn;
> +               return -1UL;
>         else
> -               return min(PHYS_PFN(type->regions[right].base), max_pfn);
> +               return PHYS_PFN(type->regions[right].base);
>  }
>
>  /**
> diff -puN mm/page_alloc.c~mm-page_alloc-fix-memmap_init_zone-pageblock-alignment mm/page_alloc.c
> --- a/mm/page_alloc.c~mm-page_alloc-fix-memmap_init_zone-pageblock-alignment
> +++ a/mm/page_alloc.c
> @@ -5359,9 +5359,14 @@ void __meminit memmap_init_zone(unsigned
>                         /*
>                          * Skip to the pfn preceding the next valid one (or
>                          * end_pfn), such that we hit a valid pfn (or end_pfn)
> -                        * on our next iteration of the loop.
> +                        * on our next iteration of the loop. Note that it needs
> +                        * to be pageblock aligned even when the region itself
> +                        * is not as move_freepages_block() can shift ahead of
> +                        * the valid region but still depends on correct page
> +                        * metadata.
>                          */
> -                       pfn = memblock_next_valid_pfn(pfn, end_pfn) - 1;
> +                       pfn = (memblock_next_valid_pfn(pfn) &
> +                                       ~(pageblock_nr_pages-1)) - 1;
>  #endif
>                         continue;
>                 }
> _
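
One more note on the end-of-memory case, just my reading of the diff
above: memblock_next_valid_pfn() now returns -1UL once pfn is past the
last memblock region instead of clamping to max_pfn, and the caller's
mask-and-decrement still terminates the loop there, because the
resulting pfn ends up far above any realistic end_pfn. A quick sketch
with made-up numbers:

    #include <stdio.h>

    #define PAGEBLOCK_NR_PAGES  512UL   /* illustrative: pageblock_order 9 */

    int main(void)
    {
            unsigned long end_pfn = 0x80000UL;  /* made-up zone end */
            unsigned long next = -1UL;          /* "no more valid pfns" */

            /* The caller's expression from the page_alloc.c hunk above. */
            unsigned long pfn = (next & ~(PAGEBLOCK_NR_PAGES - 1)) - 1;

            printf("pfn after skip:  0x%lx\n", pfn);
            printf("pfn after pfn++: 0x%lx\n", pfn + 1);
            printf("loop continues?  %s\n",
                   (pfn + 1) < end_pfn ? "yes" : "no");   /* prints "no" */
            return 0;
    }
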
>
> Patches currently in -mm which might be from neelx@xxxxxxxxxx are
>
> mm-page_alloc-fix-memmap_init_zone-pageblock-alignment.patch
>


