On 7/15/20 7:05 AM, js1304@xxxxxxxxx wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
>
> new_non_cma_page() in gup.c needs to allocate a new page that is not
> in the CMA area. new_non_cma_page() implements this by using the
> allocation scope APIs.
>
> However, there is a workaround for hugetlb. The normal hugetlb page
> allocation API for migration is alloc_huge_page_nodemask(). It consists
> of two steps. The first is dequeuing a page from the pool. The second,
> if no page is available in the pool, is allocating one with the page
> allocator.
>
> new_non_cma_page() can't use this API since the first step (dequeue)
> isn't aware of the scope API that excludes the CMA area. So
> new_non_cma_page() exports the hugetlb-internal function for the second
> step, alloc_migrate_huge_page(), to global scope and uses it directly.
> This is suboptimal since hugetlb pages in the pool cannot be utilized.
>
> This patch fixes this situation by making the dequeue function in
> hugetlb CMA aware. In the dequeue function, CMA memory is skipped if
> the PF_MEMALLOC_NOCMA flag is set.
>
> Acked-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>

Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
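
For readers following along, the dequeue-side change would look roughly
like the sketch below. This is an illustration of the idea described
above, not the exact hunk of the patch; function and field names follow
mm/hugetlb.c's per-node free lists around that time and may differ in
detail.

static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
{
	struct page *page;
	bool nocma = !!(current->flags & PF_MEMALLOC_NOCMA);

	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
		/*
		 * Callers running in a PF_MEMALLOC_NOCMA scope (e.g. the
		 * gup migration path) must not be handed a CMA page, so
		 * skip free hugetlb pages that reside in a CMA region.
		 */
		if (nocma && is_migrate_cma_page(page))
			continue;

		if (PageHWPoison(page))
			continue;

		list_move(&page->lru, &h->hugepage_activelist);
		set_page_refcounted(page);
		h->free_huge_pages--;
		h->free_huge_pages_node[nid]--;
		return page;
	}

	return NULL;
}

With the dequeue path filtering on the scope flag, new_non_cma_page()
can go through the regular alloc_huge_page_nodemask() API and still
benefit from pages already sitting in the pool, instead of calling the
exported alloc_migrate_huge_page() directly.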