The patch titled
     Subject: mm/mremap: use pmd_addr_end to calculate next in move_page_tables()
has been added to the -mm tree.  Its filename is
     mm-mremap-use-pmd_addr_end-to-calculate-next-in-move_page_tables.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-mremap-use-pmd_addr_end-to-calculate-next-in-move_page_tables.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-mremap-use-pmd_addr_end-to-calculate-next-in-move_page_tables.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Wei Yang <richardw.yang@xxxxxxxxxxxxxxx>
Subject: mm/mremap: use pmd_addr_end to calculate next in move_page_tables()

Use the generic helper pmd_addr_end() instead of open-coding the
calculation.

Link: http://lkml.kernel.org/r/20200117232254.2792-4-richardw.yang@xxxxxxxxxxxxxxx
Signed-off-by: Wei Yang <richardw.yang@xxxxxxxxxxxxxxx>
Cc: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: Kirill A. Shutemov <kirill@xxxxxxxxxxxxx>
Cc: Thomas Hellstrom <thellstrom@xxxxxxxxxx>
Cc: Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/mremap.c |    7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

--- a/mm/mremap.c~mm-mremap-use-pmd_addr_end-to-calculate-next-in-move_page_tables
+++ a/mm/mremap.c
@@ -253,11 +253,8 @@ unsigned long move_page_tables(struct vm
 	for (; old_addr < old_end; old_addr += extent, new_addr += extent) {
 		cond_resched();
-		next = (old_addr + PMD_SIZE) & PMD_MASK;
-		/* even if next overflowed, extent below will be ok */
+		next = pmd_addr_end(old_addr, old_end);
 		extent = next - old_addr;
-		if (extent > old_end - old_addr)
-			extent = old_end - old_addr;
 		old_pmd = get_old_pmd(vma->vm_mm, old_addr);
 		if (!old_pmd)
 			continue;
@@ -301,7 +298,7 @@ unsigned long move_page_tables(struct vm
 		if (pte_alloc(new_vma->vm_mm, new_pmd))
 			break;
-		next = (new_addr + PMD_SIZE) & PMD_MASK;
+		next = pmd_addr_end(new_addr, new_addr + len);
 		if (extent > next - new_addr)
 			extent = next - new_addr;
 		move_ptes(vma, old_pmd, old_addr, old_addr + extent, new_vma,
_

Patches currently in -mm which might be from richardw.yang@xxxxxxxxxxxxxxx are

mm-thp-remove-the-defer-list-related-code-since-this-will-not-happen.patch
mm-gupc-use-is_vm_hugetlb_page-to-check-whether-to-follow-huge.patch
mm-mremap-format-the-check-in-move_normal_pmd-same-as-move_huge_pmd.patch
mm-mremap-it-is-sure-to-have-enough-space-when-extent-meets-requirement.patch
mm-mremap-use-pmd_addr_end-to-calculate-next-in-move_page_tables.patch
mm-mremap-calculate-extent-in-one-place.patch
mm-mremap-start-addresses-are-properly-aligned.patch
mm-huge_memoryc-use-head-to-check-huge-zero-page.patch
mm-huge_memoryc-use-head-to-emphasize-the-purpose-of-page.patch
mm-huge_memoryc-reduce-critical-section-protected-by-split_queue_lock.patch
mm-remove-dead-code-totalram_pages_set.patch
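
A note on the helper, for reviewers who do not have the definition at hand:
pmd_addr_end(addr, end) returns the next PMD boundary above addr, clamped to
end, which is what the removed open-coded lines computed.  A minimal sketch
of the helper (architectures may provide their own definition; this follows
the asm-generic pattern) looks roughly like:

/*
 * Sketch of the generic helper: next PMD boundary after @addr,
 * clamped to @end.  The "- 1" comparison keeps the clamp correct
 * even if the boundary calculation wraps around to zero, which is
 * why the old "even if next overflowed" comment is no longer needed.
 */
#ifndef pmd_addr_end
#define pmd_addr_end(addr, end)						\
({	unsigned long __boundary = ((addr) + PMD_SIZE) & PMD_MASK;	\
	(__boundary - 1 < (end) - 1) ? __boundary : (end);		\
})
#endif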