On Tue, May 02, 2023 at 12:12:28PM +0000, 黄朝阳 (Zhaoyang Huang) wrote:
> > Hi Zhaoyang!
> >
> > On Fri, Apr 28, 2023 at 07:00:41PM +0800, zhaoyang.huang wrote:
> > > From: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
> > >
> > > Please notice the typical scenario below, which commit 168676649
> > > introduces: 12MB of free CMA pages 'help' GFP_MOVABLE keep
> > > draining/fragmenting U&R page blocks until they shrink to 12MB,
> > > without entering the slowpath, which goes against the current
> > > reclaim policy. This commit changes the criterion from the
> > > hard-coded '1/2' to a watermark check, which keeps U&R free pages
> > > around WMARK_LOW when falling back.
> >
> > Can you, please, explain the problem you're solving in more detail?
>
> I am trying to solve an OOM problem caused by slab allocation failures
> when all free pages are MIGRATE_CMA. Applying 168676649 helps reduce
> the failure ratio from 12/20 to 2/20, but I noticed that it introduces
> the phenomenon I describe above.
>
> > If I understand your code correctly, you're effectively reducing the
> > use of cma areas for movable allocations. Why is that good?
>
> Not exactly. In fact, this commit leads to CMA being used earlier than
> it is now, which helps protect U&R pages from being 'stolen' by
> GFP_MOVABLE. Imagine this scenario: 30MB of total free pages, composed
> of 10MB CMA and 20MB U&R, while the zone's low watermark is 25MB. A
> GFP_MOVABLE allocation can keep stealing U&R pages (the 1/2 criterion
> is not met) without entering the slowpath (zone_watermark_ok(WMARK_LOW)
> is true) until they shrink to 15MB. In my opinion, it makes more sense
> to have CMA take up its duty of helping movable allocations when U&R
> drops to a certain zone watermark, instead of when U&R becomes smaller
> than CMA.
>
> > Also, this is a hot path, please, make sure you're not adding much
> > overhead.
>
> I would like to give it more thought.

Got it, thank you for the explanation!

How about the following approach (completely untested)?

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6da423ec356f..4b50f497c09d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2279,12 +2279,13 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
 	if (IS_ENABLED(CONFIG_CMA)) {
 		/*
 		 * Balance movable allocations between regular and CMA areas by
-		 * allocating from CMA when over half of the zone's free memory
-		 * is in the CMA area.
+		 * allocating from CMA when over half of the zone's easily
+		 * available free memory is in the CMA area.
 		 */
 		if (alloc_flags & ALLOC_CMA &&
 		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
-		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
+		    (zone_page_state(zone, NR_FREE_PAGES) -
+		     zone->_watermark[WMARK_LOW]) / 2) {
 			page = __rmqueue_cma_fallback(zone, order);
 			if (page)
 				return page;

Basically the idea is to keep the free space above the low watermark
equally split between CMA and non-CMA areas. Will it work for you?

Thanks!
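
P.S. To make the difference concrete, here is a quick userspace sketch
(not kernel code; the numbers are the hypothetical MB values from your
scenario, and the kernel of course works in pages) that contrasts the
old 1/2 criterion with the watermark-based one while movable
allocations drain U&R and CMA stays untouched:

#include <stdbool.h>
#include <stdio.h>

/* Old criterion: use CMA once it holds over half of all free memory. */
static bool use_cma_old(long free_cma, long free_total)
{
	return free_cma > free_total / 2;
}

/*
 * Proposed criterion: use CMA once it holds over half of the free
 * memory above the low watermark.
 */
static bool use_cma_new(long free_cma, long free_total, long wmark_low)
{
	return free_cma > (free_total - wmark_low) / 2;
}

int main(void)
{
	long free_cma = 10, wmark_low = 25;	/* MB, from the scenario */
	long free_total;

	/*
	 * Model movable allocations draining U&R page blocks: total free
	 * memory falls from 30MB toward the 25MB low watermark while CMA
	 * stays at 10MB (it is never picked under the old criterion).
	 */
	for (free_total = 30; free_total >= 25; free_total--)
		printf("free=%ldMB (U&R=%ldMB): old=%d new=%d\n",
		       free_total, free_total - free_cma,
		       use_cma_old(free_cma, free_total),
		       use_cma_new(free_cma, free_total, wmark_low));
	return 0;
}

The old check never fires on the way down (10 > 15, ..., 10 > 12 are
all false), so U&R is drained all the way to the watermark; the new
check fires immediately (10 > (30 - 25) / 2), so CMA absorbs the
movable allocations and U&R stays around WMARK_LOW.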