On Tue, Jun 9, 2020 at 10:53 PM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
>
> On Wed 27-05-20 15:44:57, Joonsoo Kim wrote:
> > From: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
> >
> > There is a user who do not want to use CMA memory for migration. Until
> > now, it is implemented by caller side but it's not optimal since there
> > is limited information on caller. This patch implements it on callee side
> > to get better result.
>
> I do not follow this changelog and honestly do not see an improvement.
> skip_cma in the alloc_control sound like a hack to me. I can now see

new_non_cma_page() wants to allocate new pages that are not in the CMA
area. It implements this by not specifying the __GFP_MOVABLE flag, or by
removing it from the given gfp mask.

hugetlb page allocation has two steps. The first is dequeuing a page from
the pool; if no page is available in the pool, the second is allocating
one from the page allocator. new_non_cma_page() can control hugetlb
allocation from the page allocator via the gfp flags. However, dequeuing
cannot be controlled that way, so it skips the dequeue step completely.
This is why new_non_cma_page() uses alloc_migrate_huge_page() instead of
alloc_huge_page_nodemask(). My patch makes the hugetlb code CMA aware so
that new_non_cma_page() can get the benefit of the hugetlb pool.

> why your earlier patch has started to or the given gfp_mask. If anything
> this should be folded here. But even then I do not like a partial
> gfp_mask (__GFP_NOWARN on its own really has GFP_NOWAIT like semantic).

Okay, I will not use a partial gfp_mask.

Thanks.