For 'ALLOC_HARDER' page allocations there is a fast path through the
allocator which lets them reach __rmqueue_xxx more easily than other
allocation types. However, when CMA is enabled, the free_pages value
used in __zone_watermark_ok() is reduced by the number of free CMA
pages, which can make the watermark check fail even though the zone
still holds enough HighAtomic, Unmovable and Reclaimable pages. So
take 'alloc_harder' into account here and keep free CMA pages in the
count for such allocations, removing this obstacle on the way to the
final allocation.

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxxxxxx>
---
 mm/page_alloc.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 635d7dd..cc18620 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3045,8 +3045,11 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
 #ifdef CONFIG_CMA
-	/* If allocation can't use CMA areas don't use free CMA pages */
-	if (!(alloc_flags & ALLOC_CMA))
+	/*
+	 * If the allocation can't use CMA areas and is not a high-order
+	 * alloc_harder allocation, don't count free CMA pages.
+	 */
+	if (!(alloc_flags & ALLOC_CMA) && (!alloc_harder || !order))
 		free_pages -= zone_page_state(z, NR_FREE_CMA_PAGES);
 #endif

-- 
1.9.1
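
For reviewers, below is a minimal userspace sketch (not kernel code; the
helper watermark_ok() and the numbers are made up purely for illustration)
of the watermark arithmetic the patch changes. It only mimics the condition
added above, under the assumption that the zone's free pages are dominated
by CMA pages:

#include <stdbool.h>
#include <stdio.h>

#define ALLOC_CMA     0x1
#define ALLOC_HARDER  0x2

/*
 * Simplified stand-in for the CMA deduction in __zone_watermark_ok():
 * with the patch, free CMA pages are subtracted only when the request
 * cannot use CMA areas and is not a high-order alloc_harder allocation.
 */
static bool watermark_ok(long free_pages, long free_cma, long mark,
			 unsigned int order, unsigned int alloc_flags)
{
	bool alloc_harder = alloc_flags & ALLOC_HARDER;

	if (!(alloc_flags & ALLOC_CMA) && (!alloc_harder || !order))
		free_pages -= free_cma;

	return free_pages > mark;
}

int main(void)
{
	/* 10000 free pages, 6000 of them in CMA, watermark 5000. */
	long free = 10000, cma = 6000, mark = 5000;

	/*
	 * The order-3 ALLOC_HARDER request passes because CMA pages are
	 * no longer deducted; the plain order-3 request still fails.
	 */
	printf("harder order-3: %d\n", watermark_ok(free, cma, mark, 3, ALLOC_HARDER));
	printf("normal order-3: %d\n", watermark_ok(free, cma, mark, 3, 0));
	return 0;
}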