On 19/12/2018 14:40, Aneesh Kumar K.V wrote:
> This helper does a get_user_pages_fast and, if it finds pages in the CMA
> area, it will try to migrate them before taking a page reference. This
> makes sure that we don't keep non-movable pages (due to an elevated page
> reference count) in the CMA area. Not being able to move pages out of the
> CMA area results in CMA allocation failures.
>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxx>
> ---
>  include/linux/hugetlb.h |   2 +
>  include/linux/migrate.h |   3 +
>  mm/hugetlb.c            |   4 +-
>  mm/migrate.c            | 139 ++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 146 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 087fd5f48c91..1eed0cdaec0e 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -371,6 +371,8 @@ struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
>  						nodemask_t *nmask);
>  struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
>  					unsigned long address);
> +struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
> +					int nid, nodemask_t *nmask);
>  int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
>  			pgoff_t idx);
>
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index f2b4abbca55e..d82b35afd2eb 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -286,6 +286,9 @@ static inline int migrate_vma(const struct migrate_vma_ops *ops,
>  }
>  #endif /* IS_ENABLED(CONFIG_MIGRATE_VMA_HELPER) */
>
> +extern int get_user_pages_cma_migrate(unsigned long start, int nr_pages, int write,
> +				       struct page **pages);

Ah, sorry for commenting on the same patch again, but ./scripts/checkpatch.pl
complains a lot about this patch.

--
Alexey
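
[Editorial note for readers skimming the thread: below is a simplified,
illustrative sketch of the pattern the changelog describes -- pin with
get_user_pages_fast(), then migrate any pinned CMA pages before the caller
holds long-term references. It is not the patch body: the new_non_cma_page()
callback name, the GFP flags, and the retry structure are assumptions for
the example, and the hugetlb handling the real patch adds via
alloc_migrate_huge_page() is omitted.]

#include <linux/mm.h>
#include <linux/swap.h>
#include <linux/migrate.h>

/*
 * Hypothetical allocation callback: get a replacement page outside CMA.
 * GFP_USER without __GFP_MOVABLE is not satisfied from CMA pageblocks,
 * since CMA only serves movable allocations.
 */
static struct page *new_non_cma_page(struct page *page, unsigned long private)
{
	return alloc_page(GFP_USER | __GFP_NOWARN);
}

int get_user_pages_cma_migrate(unsigned long start, int nr_pages, int write,
			       struct page **pages)
{
	bool migrate_allow = true;
	LIST_HEAD(cma_page_list);
	int i, ret;

get_user_again:
	ret = get_user_pages_fast(start, nr_pages, write, pages);
	if (ret <= 0)
		return ret;

	/* Isolate every pinned page that sits in the CMA area. */
	for (i = 0; i < ret && migrate_allow; i++) {
		if (is_migrate_cma_page(pages[i]) &&
		    !isolate_lru_page(pages[i]))
			list_add_tail(&pages[i]->lru, &cma_page_list);
	}

	if (!list_empty(&cma_page_list)) {
		/* Drop the fast-GUP references so migration can proceed. */
		for (i = 0; i < ret; i++)
			put_page(pages[i]);

		if (migrate_pages(&cma_page_list, new_non_cma_page,
				  NULL, 0, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
			/* Migration failed: put pages back, pin them as-is. */
			putback_movable_pages(&cma_page_list);
			migrate_allow = false;
		}
		/* Re-pin; the VAs should now resolve to non-CMA pages. */
		goto get_user_again;
	}

	return ret;
}

The ordering matters: the gup references must be dropped before calling
migrate_pages(), because migration verifies the page's expected reference
count, and an extra pin would make every migration attempt fail.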