The patch titled
     Subject: mm/mremap: start addresses are properly aligned
has been added to the -mm tree.  Its filename is
     mm-mremap-start-addresses-are-properly-aligned.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-mremap-start-addresses-are-properly-aligned.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-mremap-start-addresses-are-properly-aligned.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Wei Yang <richardw.yang@xxxxxxxxxxxxxxx>
Subject: mm/mremap: start addresses are properly aligned

After the previous cleanup, extent is the minimal step for both source
and destination.  This means when extent is HPAGE_PMD_SIZE or PMD_SIZE,
old_addr and new_addr are properly aligned too.

Since these two functions are only invoked in move_page_tables, it is
safe to remove the checks now.

Link: http://lkml.kernel.org/r/20200117232254.2792-6-richardw.yang@xxxxxxxxxxxxxxx
Signed-off-by: Wei Yang <richardw.yang@xxxxxxxxxxxxxxx>
Cc: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: Kirill A. Shutemov <kirill@xxxxxxxxxxxxx>
Cc: Thomas Hellstrom <thellstrom@xxxxxxxxxx>
Cc: Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/huge_memory.c |    3 ---
 mm/mremap.c      |    3 ---
 2 files changed, 6 deletions(-)

--- a/mm/huge_memory.c~mm-mremap-start-addresses-are-properly-aligned
+++ a/mm/huge_memory.c
@@ -1878,9 +1878,6 @@ bool move_huge_pmd(struct vm_area_struct
 	struct mm_struct *mm = vma->vm_mm;
 	bool force_flush = false;
 
-	if ((old_addr & ~HPAGE_PMD_MASK) || (new_addr & ~HPAGE_PMD_MASK))
-		return false;
-
 	/*
 	 * The destination pmd shouldn't be established, free_pgtables()
 	 * should have release it.
--- a/mm/mremap.c~mm-mremap-start-addresses-are-properly-aligned
+++ a/mm/mremap.c
@@ -199,9 +199,6 @@ static bool move_normal_pmd(struct vm_ar
 	struct mm_struct *mm = vma->vm_mm;
 	pmd_t pmd;
 
-	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK))
-		return false;
-
 	/*
 	 * The destination pmd shouldn't be established, free_pgtables()
 	 * should have release it.
_

Patches currently in -mm which might be from richardw.yang@xxxxxxxxxxxxxxx are

mm-thp-remove-the-defer-list-related-code-since-this-will-not-happen.patch
mm-gupc-use-is_vm_hugetlb_page-to-check-whether-to-follow-huge.patch
mm-mremap-format-the-check-in-move_normal_pmd-same-as-move_huge_pmd.patch
mm-mremap-it-is-sure-to-have-enough-space-when-extent-meets-requirement.patch
mm-mremap-use-pmd_addr_end-to-calculate-next-in-move_page_tables.patch
mm-mremap-calculate-extent-in-one-place.patch
mm-mremap-start-addresses-are-properly-aligned.patch
mm-huge_memoryc-use-head-to-check-huge-zero-page.patch
mm-huge_memoryc-use-head-to-emphasize-the-purpose-of-page.patch
mm-huge_memoryc-reduce-critical-section-protected-by-split_queue_lock.patch
mm-remove-dead-code-totalram_pages_set.patch
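
As a sanity check of the alignment argument in the changelog above: the
extent that move_page_tables() passes down is derived from
pmd_addr_end(), and such an extent can only equal PMD_SIZE when the
start address is already PMD-aligned, which is why the removed checks
could never fire.  The standalone userspace sketch below is not kernel
code; the 2MB PMD_SIZE value and the local pmd_addr_end() mimic are
assumptions matching a typical x86-64 4K-page layout.

	/*
	 * Sketch only: verify that a full-PMD extent computed the way
	 * move_page_tables() computes it implies an aligned start.
	 */
	#include <assert.h>
	#include <stdio.h>

	#define PMD_SIZE	(1UL << 21)	/* assumed: 2MB PMDs */
	#define PMD_MASK	(~(PMD_SIZE - 1))

	/* mimics the kernel's pmd_addr_end(): end of the PMD covering
	 * addr, clamped to end */
	static unsigned long pmd_addr_end(unsigned long addr,
					  unsigned long end)
	{
		unsigned long boundary = (addr + PMD_SIZE) & PMD_MASK;

		return boundary < end ? boundary : end;
	}

	int main(void)
	{
		unsigned long addr;

		for (addr = 0; addr < 4 * PMD_SIZE; addr += 4096) {
			unsigned long extent;

			extent = pmd_addr_end(addr, ~0UL) - addr;
			/*
			 * A full-PMD extent implies the start address
			 * was already PMD-aligned, so the checks
			 * removed from move_huge_pmd() and
			 * move_normal_pmd() were redundant.
			 */
			if (extent == PMD_SIZE)
				assert((addr & ~PMD_MASK) == 0);
		}
		printf("extent == PMD_SIZE implies PMD alignment\n");
		return 0;
	}

The same reasoning covers the HPAGE_PMD_MASK check in move_huge_pmd(),
since HPAGE_PMD_SIZE equals PMD_SIZE whenever THP is enabled.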