On Tue, Oct 17, 2023 at 6:40 AM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> On Mon, 16 Oct 2023 15:12:45 +0800 "zhaoyang.huang" <zhaoyang.huang@xxxxxxxxxx> wrote:
>
> > From: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
> >
> > Under the current CMA utilization policy, an alloc_pages(GFP_USER)
> > can 'steal' UNMOVABLE & RECLAIMABLE page blocks with the help of
> > CMA (it passes zone_watermark_ok by counting CMA pages in, but then
> > takes U&R blocks in rmqueue), which can make a subsequent
> > alloc_pages(GFP_KERNEL) fail. Solve this by introducing a second
> > watermark check for GFP_MOVABLE allocations, which lets them use
> > CMA when appropriate.
> >
> > -- Free_pages(30MB)
> > |
> > |
> > -- WMARK_LOW(25MB)
> > |
> > -- Free_CMA(12MB)
> > |
> > |
> > --
> >
> > Signed-off-by: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
> > ---
> > v6: update comments
>
> The patch itself is identical to the v5 patch. So either you meant
> "update changelog" above or you sent the wrong diff?
Sorry, that should have been "update changelog".
>
> Also, have we resolved any concerns regarding possible performance
> impacts of this change?
I don't think this commit introduces a performance impact: it only adds
one more path that lets __rmqueue() use CMA page blocks earlier than it
does today.

static __always_inline struct page *
__rmqueue(struct zone *zone, unsigned int order, int migratetype,
						unsigned int alloc_flags)
{
	struct page *page;

	if (IS_ENABLED(CONFIG_CMA)) {
		if (alloc_flags & ALLOC_CMA &&
-		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
-		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
+		    use_cma_first(zone, order, alloc_flags)) {
			/*
			 * The current '1/2' heuristic is kept; this only
			 * adds a path that uses CMA earlier than before.
			 */
			page = __rmqueue_cma_fallback(zone, order);
			if (page)
				return page;
		}
	}
retry:
	/*
	 * The normal __rmqueue_smallest() path is not affected; it acts
	 * as the fallback when __rmqueue_cma_fallback() fails.
	 */
	page = __rmqueue_smallest(zone, order, migratetype);
	if (unlikely(!page)) {
		if (alloc_flags & ALLOC_CMA)
			page = __rmqueue_cma_fallback(zone, order);

		if (!page && __rmqueue_fallback(zone, order, migratetype,
						alloc_flags))
			goto retry;
	}
	return page;
}
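
For readers following along without the patch at hand, below is a rough
sketch of what a use_cma_first() helper in mm/page_alloc.c could look
like, based only on the changelog's description of a "second watermark
check" for movable allocations. This is an illustration, not the body
of the v5/v6 patch: the choice of wmark_pages(), the highest_zoneidx
argument of 0, and keeping the existing 1/2 heuristic as the fallback
branch are assumptions on my part.

/*
 * Sketch only (assumed body, not the actual patch): prefer CMA when the
 * zone would fail its watermark once CMA pages are excluded, i.e. when
 * UNMOVABLE & RECLAIMABLE blocks are running low; otherwise fall back to
 * the existing "more than half of free memory is CMA" heuristic.
 */
static bool use_cma_first(struct zone *zone, unsigned int order,
			  unsigned int alloc_flags)
{
	unsigned long watermark = wmark_pages(zone,
					alloc_flags & ALLOC_WMARK_MASK);

	/*
	 * The watermark only passes with CMA counted in: a movable
	 * allocation would otherwise "steal" U&R blocks, so use CMA
	 * first and keep U&R blocks for later GFP_KERNEL allocations.
	 */
	if (!zone_watermark_ok(zone, order, watermark, 0,
			       alloc_flags & ~ALLOC_CMA))
		return true;

	/* Plenty of U&R blocks left: keep the current balance heuristic. */
	return zone_page_state(zone, NR_FREE_CMA_PAGES) >
	       zone_page_state(zone, NR_FREE_PAGES) / 2;
}

With a helper shaped like this, the fast path in __rmqueue() stays the
same when U&R blocks are plentiful, and only starts draining CMA earlier
when the non-CMA watermark check fails.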