On 04/21/2017 02:17 AM, Zi Yan wrote:
> From: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
>
> This patch enables thp migration for mbind(2) and migrate_pages(2).
>
> Signed-off-by: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
> ---
> ChangeLog v1 -> v2:
> - support pte-mapped and doubly-mapped thp
> ---
>  mm/mempolicy.c | 108 +++++++++++++++++++++++++++++++++++++++++----------------
>  1 file changed, 79 insertions(+), 29 deletions(-)

Snip

> @@ -981,7 +1012,17 @@ static struct page *new_node_page(struct page *page, unsigned long node, int **x
>  	if (PageHuge(page))
>  		return alloc_huge_page_node(page_hstate(compound_head(page)),
>  					node);
> -	else
> +	else if (thp_migration_supported() && PageTransHuge(page)) {
> +		struct page *thp;
> +
> +		thp = alloc_pages_node(node,
> +			(GFP_TRANSHUGE | __GFP_THISNODE) & ~__GFP_RECLAIM,
> +			HPAGE_PMD_ORDER);
> +		if (!thp)
> +			return NULL;
> +		prep_transhuge_page(thp);
> +		return thp;
> +	} else
>  		return __alloc_pages_node(node, GFP_HIGHUSER_MOVABLE |
>  						    __GFP_THISNODE, 0);
>  }
> @@ -1147,6 +1188,15 @@ static struct page *new_page(struct page *page, unsigned long start, int **x)
>  	if (PageHuge(page)) {
>  		BUG_ON(!vma);
>  		return alloc_huge_page_noerr(vma, address, 1);
> +	} else if (thp_migration_supported() && PageTransHuge(page)) {
> +		struct page *thp;
> +
> +		thp = alloc_hugepage_vma(GFP_TRANSHUGE, vma, address,
> +					 HPAGE_PMD_ORDER);
> +		if (!thp)
> +			return NULL;
> +		prep_transhuge_page(thp);
> +		return thp;

The GFP flags in both of these new page allocation callbacks should be
the same. Will alloc_hugepage_vma() eventually end up calling the page
allocator with the following flags?

	(GFP_TRANSHUGE | __GFP_THISNODE) & ~__GFP_RECLAIM
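
For illustration only (just a sketch of what "same flags" could look
like, not a tested change, and whether __GFP_THISNODE is actually wanted
in the vma-based path is part of the question), the THP branch in
new_page() would then mirror new_node_page():

	} else if (thp_migration_supported() && PageTransHuge(page)) {
		struct page *thp;

		/* use the same mask as new_node_page() above */
		thp = alloc_hugepage_vma((GFP_TRANSHUGE | __GFP_THISNODE) &
					 ~__GFP_RECLAIM,
					 vma, address, HPAGE_PMD_ORDER);
		if (!thp)
			return NULL;
		prep_transhuge_page(thp);
		return thp;

If alloc_hugepage_vma() does not itself apply that mask, it would be good
to spell out in the changelog why the two callbacks intentionally differ.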