The patch titled
     Subject: mm: optimization on page allocation when CMA enabled
has been added to the -mm mm-unstable branch.  Its filename is
     mm-optimization-on-page-allocation-when-cma-enabled.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-optimization-on-page-allocation-when-cma-enabled.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
Subject: mm: optimization on page allocation when CMA enabled
Date: Thu, 11 May 2023 13:22:30 +0800

Consider the timeline of scenarios below, with WMARK_LOW=25MB and
WMARK_MIN=5MB (1.9GB of managed pages).  Under the current policy of
"use CMA when free CMA exceeds half of all free pages" (the "fixed 1/2
ratio" row below), CMA does not begin to be used until scenario 'C'.
That leaves scenarios 'A' and 'B' in a faulty state: free UNMOVABLE &
RECLAIMABLE pages have already dropped below the corresponding
watermark without any reclaim being triggered, which should be deemed
a violation of the current memory policy.  This commit solves that by
checking zone_watermark_ok() again with CMA pages excluded, which
yields a more appropriate point at which to start using CMA.

        -- Free_pages
        |
        |
        -- WMARK_LOW
        |
        -- Free_CMA
        |
        |
        --

Free_CMA/Free_pages(MB)   A(12/30) --> B(12/25) --> C(12/20)
fixed 1/2 ratio               N            N            Y
this commit                   Y            Y            Y
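For illustration only (this sketch is not part of the patch): a minimal
userspace program that reproduces the table above.  The helper names and
the plain-integer interface are invented for the example; the second
helper mirrors the logic of use_cma_first() introduced below, with the
kernel's zone_watermark_ok()-without-ALLOC_CMA check approximated as
"free_pages - free_cma < watermark".

#include <stdbool.h>
#include <stdio.h>

/* Current policy: use CMA only when over half of free memory is CMA. */
static bool cma_first_fixed_ratio(long free_pages, long free_cma)
{
	return free_cma > free_pages / 2;
}

/*
 * Patched policy: if the watermark check would fail once CMA pages are
 * excluded, use CMA immediately; otherwise keep the 1/2 balancing rule.
 */
static bool cma_first_watermark(long free_pages, long free_cma, long wmark)
{
	if (free_pages - free_cma < wmark)
		return true;
	return free_cma > free_pages / 2;
}

int main(void)
{
	const long wmark_low = 25;	/* MB, as in the changelog */
	/* scenarios A, B, C: {free_pages, free_cma} in MB */
	const long s[][2] = { {30, 12}, {25, 12}, {20, 12} };

	for (int i = 0; i < 3; i++)
		printf("%c(%ld/%ld): fixed=%c  this commit=%c\n",
		       'A' + i, s[i][1], s[i][0],
		       cma_first_fixed_ratio(s[i][0], s[i][1]) ? 'Y' : 'N',
		       cma_first_watermark(s[i][0], s[i][1], wmark_low) ? 'Y' : 'N');
	return 0;
}

This prints N/Y, N/Y, Y/Y for A, B and C, matching the two rows of the
table: the patched policy starts using CMA as soon as non-CMA free
memory falls below WMARK_LOW, rather than waiting until 'C'.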
Link: https://lkml.kernel.org/r/1683782550-25799-1-git-send-email-zhaoyang.huang@xxxxxxxxxx
Signed-off-by: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: ke.wang <ke.wang@xxxxxxxxxx>
Cc: Minchan Kim <minchan@xxxxxxxxxx>
Cc: Roman Gushchin <roman.gushchin@xxxxxxxxx>
Cc: Zhaoyang Huang <huangzhaoyang@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page_alloc.c |   44 ++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 40 insertions(+), 4 deletions(-)

--- a/mm/page_alloc.c~mm-optimization-on-page-allocation-when-cma-enabled
+++ a/mm/page_alloc.c
@@ -2275,6 +2275,43 @@ do_steal:
 }
 
+#ifdef CONFIG_CMA
+/*
+ * GFP_MOVABLE allocations could drain UNMOVABLE & RECLAIMABLE page blocks
+ * with the help of CMA, which can make GFP_KERNEL allocations fail.  Check
+ * zone_watermark_ok() again without ALLOC_CMA to see if CMA should be used first.
+ */
+static bool use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
+{
+	unsigned long watermark;
+	bool cma_first = false;
+
+	watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
+	/* Check if GFP_MOVABLE passed the previous zone_watermark_ok() via the help of CMA. */
+	if (zone_watermark_ok(zone, order, watermark, 0, alloc_flags & (~ALLOC_CMA))) {
+		/*
+		 * Balance movable allocations between regular and CMA areas by
+		 * allocating from CMA when over half of the zone's free memory
+		 * is in the CMA area.
+		 */
+		cma_first = (zone_page_state(zone, NR_FREE_CMA_PAGES) >
+				zone_page_state(zone, NR_FREE_PAGES) / 2);
+	} else {
+		/*
+		 * A failed watermark check means UNMOVABLE & RECLAIMABLE pages
+		 * are not enough now; use CMA first to keep them around the
+		 * corresponding watermark.
+		 */
+		cma_first = true;
+	}
+	return cma_first;
+}
+#else
+static bool use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
+{
+	return false;
+}
+#endif
 /*
  * Do the hard work of removing an element from the buddy allocator.
  * Call me with the zone->lock already held.
@@ -2288,12 +2325,11 @@ __rmqueue(struct zone *zone, unsigned in
 	if (IS_ENABLED(CONFIG_CMA)) {
 		/*
 		 * Balance movable allocations between regular and CMA areas by
-		 * allocating from CMA when over half of the zone's free memory
-		 * is in the CMA area.
+		 * allocating from CMA based on re-checking zone_watermark_ok()
+		 * to see whether the latest check passed only with CMA's help.
 		 */
 		if (alloc_flags & ALLOC_CMA &&
-		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
-		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
+		    use_cma_first(zone, order, alloc_flags)) {
 			page = __rmqueue_cma_fallback(zone, order);
 			if (page)
 				return page;
_

Patches currently in -mm which might be from zhaoyang.huang@xxxxxxxxxx are

mm-optimization-on-page-allocation-when-cma-enabled.patch