The patch titled
     Subject: mm: mempolicy: don't have to split pmd for huge zero page
has been removed from the -mm tree.  Its filename was
     mm-mempolicy-dont-have-to-split-pmd-for-huge-zero-page.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: Yang Shi <shy828301@xxxxxxxxx>
Subject: mm: mempolicy: don't have to split pmd for huge zero page

When trying to migrate pages to obey mempolicy, the huge zero page is
split by inserting the base zero pfn into all PTEs, then the page table
walk falls back to the PTE level and just skips the zero page.  Skipping
the zero page for mempolicy has been the kernel's behavior since v2.6.16
due to commit f4598c8b3678 ("[PATCH] migration: make sure there is no
attempt to migrate reserved pages.").  So it seems pointless to split the
huge zero page; it can simply be skipped like the base zero page.

Set ACTION_CONTINUE to prevent walk_page_range() from splitting the pmd
in this case.

Link: https://lkml.kernel.org/r/20210609172146.3594-1-shy828301@xxxxxxxxx
Link: https://lkml.kernel.org/r/20210604203513.240709-1-shy828301@xxxxxxxxx
Signed-off-by: Yang Shi <shy828301@xxxxxxxxx>
Reviewed-by: Zi Yan <ziy@xxxxxxxxxx>
Acked-by: Michal Hocko <mhocko@xxxxxxxx>
Cc: Naoya Horiguchi <nao.horiguchi@xxxxxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/mempolicy.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

--- a/mm/mempolicy.c~mm-mempolicy-dont-have-to-split-pmd-for-huge-zero-page
+++ a/mm/mempolicy.c
@@ -437,7 +437,8 @@ static inline bool queue_pages_required(
 /*
  * queue_pages_pmd() has four possible return values:
- * 0 - pages are placed on the right node or queued successfully.
+ * 0 - pages are placed on the right node or queued successfully, or
+ *     special page is met, i.e. huge zero page.
  * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
  *     specified.
  * 2 - THP was split.
@@ -461,8 +462,7 @@ static int queue_pages_pmd(pmd_t *pmd, s
 	page = pmd_page(*pmd);
 	if (is_huge_zero_page(page)) {
 		spin_unlock(ptl);
-		__split_huge_pmd(walk->vma, pmd, addr, false, NULL);
-		ret = 2;
+		walk->action = ACTION_CONTINUE;
 		goto out;
 	}
 	if (!queue_pages_required(page, qp))
@@ -489,7 +489,8 @@ out:
  * and move them to the pagelist if they do.
  *
  * queue_pages_pte_range() has three possible return values:
- * 0 - pages are placed on the right node or queued successfully.
+ * 0 - pages are placed on the right node or queued successfully, or
+ *     special page is met, i.e. zero page.
  * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
  *     specified.
  * -EIO - only MPOL_MF_STRICT was specified and an existing page was already
_

Patches currently in -mm which might be from shy828301@xxxxxxxxx are
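
[Editor's note: for readers unfamiliar with the page walk API the patch relies on,
here is a minimal sketch, not taken from the patch itself, of how a pmd_entry
callback can set walk->action = ACTION_CONTINUE so that walk_page_range() neither
splits a huge pmd nor descends to the PTE level.  This is the same mechanism
queue_pages_pmd() now uses for the huge zero page; the callback and ops names
below (example_pmd_entry, example_walk_ops) are hypothetical.]

/*
 * Minimal sketch, not part of the patch above.  ACTION_CONTINUE,
 * pmd_trans_huge_lock(), is_huge_zero_page() and struct mm_walk_ops are the
 * real kernel interfaces; the function and ops names are made up.
 */
#include <linux/pagewalk.h>
#include <linux/huge_mm.h>

static int example_pmd_entry(pmd_t *pmd, unsigned long addr,
			     unsigned long end, struct mm_walk *walk)
{
	spinlock_t *ptl;

	ptl = pmd_trans_huge_lock(pmd, walk->vma);
	if (!ptl)
		return 0;	/* not a huge pmd; the walker handles PTEs as usual */

	if (is_huge_zero_page(pmd_page(*pmd)))
		/*
		 * Ask walk_page_range() to skip this pmd entirely: do not
		 * split it and do not call the pte_entry callback for it.
		 */
		walk->action = ACTION_CONTINUE;

	spin_unlock(ptl);
	return 0;
}

static const struct mm_walk_ops example_walk_ops = {
	.pmd_entry	= example_pmd_entry,
};

[Compared with splitting the pmd into PTEs full of the zero pfn, this avoids a
pointless split, since the PTE-level walk would skip the zero page anyway.]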