> Hi Zhaoyang!
>
> On Fri, Apr 28, 2023 at 07:00:41PM +0800, zhaoyang.huang wrote:
> > From: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
> >
> > Please note the following typical scenario introduced by commit
> > 168676649: 12MB of free CMA pages 'help' GFP_MOVABLE allocations keep
> > draining/fragmenting U&R page blocks until those also shrink to 12MB,
> > without ever entering the slowpath, which goes against the current
> > reclaim policy. This commit changes the criterion from the hard-coded
> > '1/2' to a watermark check, which keeps U&R free pages around WMARK_LOW
> > when the fallback happens.
>
> Can you, please, explain the problem you're solving in more detail?

I am trying to solve an OOM problem caused by slab allocation failures when
all free pages are MIGRATE_CMA. Applying 168676649 helps reduce the failure
ratio from 12/20 to 2/20, but I noticed that it introduces the phenomenon I
describe above.

> If I understand your code correctly, you're effectively reducing the use
> of cma areas for movable allocations. Why is that good?

Not exactly. In fact, this commit leads to CMA being used earlier than it is
now, which helps protect U&R pages from being 'stolen' by GFP_MOVABLE.
Imagine this scenario: 30MB of total free pages, composed of 10MB CMA and
20MB U&R, with the zone's low watermark at 25MB. A GFP_MOVABLE allocation can
keep stealing U&R pages (the 1/2 criterion is not met) without entering the
slowpath (zone_watermark_ok(WMARK_LOW) stays true) until U&R shrinks to 15MB.
In my opinion, it makes more sense to have CMA take up its duty of helping
movable allocations once U&R drops toward a certain zone watermark, rather
than once U&R becomes smaller than the free CMA pages.

> Also, this is a hot path, please, make sure you're not adding much
> overhead.

I will give this more thought.

> And please use scripts/checkpatch.pl next time, there are many code style
> issues.

ok

> Thanks!
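To make the scenario above concrete, here is a minimal userspace sketch (my
own simplification, not kernel code; the real fast path involves
zone_watermark_fast() and per-pageblock stealing) that models the two checks
involved: the watermark check, which counts free CMA pages toward a movable
allocation, and the old hard-coded 1/2 criterion for falling back to CMA
first. With 10MB CMA + 20MB U&R free and a 25MB low watermark, U&R drains to
15MB before either check trips:

#include <stdio.h>
#include <stdbool.h>

int main(void)
{
	long cma_free = 10, ur_free = 20;	/* 30MB free in total */
	const long wmark_low = 25;		/* zone's low watermark */

	/* GFP_MOVABLE keeps taking U&R pageblocks, 1MB at a time */
	while (ur_free > 0) {
		long total_free = cma_free + ur_free;
		/* fast path: CMA pages are counted for the watermark */
		bool fast_path_ok = total_free > wmark_low;
		/* old rule: use CMA first only once it holds half of free */
		bool use_cma_first = cma_free > total_free / 2;

		if (!fast_path_ok || use_cma_first)
			break;
		ur_free--;	/* another U&R pageblock is drained */
	}
	printf("U&R free when the fast path finally stops: %ldMB\n", ur_free);
	return 0;
}

This prints 15MB: the 1/2 criterion never fires (10MB of CMA never exceeds
half of the remaining free pages before the watermark check fails), so U&R
alone absorbs the drain from 20MB down to 15MB.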
> > DMA32 free:25900kB boost:0kB min:4176kB low:25856kB high:29516kB
> >
> > Signed-off-by: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
> > ---
> >  mm/page_alloc.c | 40 ++++++++++++++++++++++++++++++++++++----
> >  1 file changed, 36 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 0745aed..97768fe 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -3071,6 +3071,39 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
> >
> >  }
> >
> > +#ifdef CONFIG_CMA
> > +static bool __if_use_cma_first(struct zone *zone, unsigned int order,
> > +				unsigned int alloc_flags)
> > +{
> > +	unsigned long cma_proportion = 0;
> > +	unsigned long cma_free_proportion = 0;
> > +	unsigned long watermark = 0;
> > +	unsigned long wm_fact[ALLOC_WMARK_MASK] = {1, 1, 2};
> > +	long count = 0;
> > +	bool cma_first = false;
> > +
> > +	watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
> > +	/* check if GFP_MOVABLE pass previous watermark check via the help of CMA */
> > +	if (!zone_watermark_ok(zone, order, watermark, 0, alloc_flags & (~ALLOC_CMA))) {
> > +		alloc_flags &= ALLOC_WMARK_MASK;
> > +		/* WMARK_LOW failed lead to using cma first, this helps U&R stay
> > +		 * around low when being drained by GFP_MOVABLE
> > +		 */
> > +		if (alloc_flags <= ALLOC_WMARK_LOW)
> > +			cma_first = true;
> > +		/* check proportion for WMARK_HIGH */
> > +		else {
> > +			count = atomic_long_read(&zone->managed_pages);
> > +			cma_proportion = zone->cma_pages * 100 / count;
> > +			cma_free_proportion = zone_page_state(zone, NR_FREE_CMA_PAGES) * 100
> > +						/ zone_page_state(zone, NR_FREE_PAGES);
> > +			cma_first = (cma_free_proportion >= wm_fact[alloc_flags] * cma_proportion
> > +					|| cma_free_proportion >= 50);
> > +		}
> > +	}
> > +	return cma_first;
> > +}
> > +#endif
> >  /*
> >   * Do the hard work of removing an element from the buddy allocator.
> >   * Call me with the zone->lock already held.
> > @@ -3087,10 +3120,9 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
> >  	 * allocating from CMA when over half of the zone's free memory
> >  	 * is in the CMA area.
> >  	 */
> > -	if (alloc_flags & ALLOC_CMA &&
> > -	    zone_page_state(zone, NR_FREE_CMA_PAGES) >
> > -	    zone_page_state(zone, NR_FREE_PAGES) / 2) {
> > -		page = __rmqueue_cma_fallback(zone, order);
> > +	if (migratetype == MIGRATE_MOVABLE) {
> > +		bool cma_first = __if_use_cma_first(zone, order, alloc_flags);
> > +		page = cma_first ? __rmqueue_cma_fallback(zone, order) : NULL;
> >  		if (page)
> >  			return page;
> >  	}
> > --
> > 1.9.1
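As a side note on the WMARK_HIGH branch of __if_use_cma_first() above, the
intent of the proportion test can be illustrated with another small
userspace sketch (again my own approximation with made-up numbers, not part
of the patch): CMA is preferred once free CMA pages are over-represented
relative to the zone's CMA share by the wm_fact factor, or once they make up
at least half of all free pages.

#include <stdio.h>
#include <stdbool.h>

int main(void)
{
	/* hypothetical zone: 1000 managed pages, 200 of them CMA */
	long managed = 1000, cma_pages = 200;
	/* hypothetical free counts: 300 free pages, 100 of them CMA */
	long free_pages = 300, free_cma = 100;
	long wm_fact_high = 2;		/* wm_fact[ALLOC_WMARK_HIGH] */

	long cma_proportion = cma_pages * 100 / managed;	/* 20% */
	long cma_free_proportion = free_cma * 100 / free_pages;	/* 33% */

	bool cma_first = cma_free_proportion >= wm_fact_high * cma_proportion ||
			 cma_free_proportion >= 50;

	printf("zone CMA share %ld%%, free CMA share %ld%%, cma_first: %s\n",
	       cma_proportion, cma_free_proportion, cma_first ? "yes" : "no");
	return 0;
}

With these numbers CMA holds 33% of the free pages against a 20% share of
the zone, which is below both the 2x factor (40%) and the 50% cap, so
cma_first stays false and U&R is still used first.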