On Wed, Apr 19, 2023 at 2:07 PM Huang, Ying <ying.huang@xxxxxxxxx> wrote:
>
> "zhaoyang.huang" <zhaoyang.huang@xxxxxxxxxx> writes:
>
> > From: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
> >
> > It is a waste of effort to reclaim CMA pages during direct reclaim if
> > they are not available to the current allocation context. Skip them in
> > that case.
>
> Do you have any performance number for this change?

Sorry, no. This patch arose from the OOM issue below, where MIGRATE_CMA
pageblocks held almost 100 percent of the zone's free pages (note the
"(C)" blocks in the DMA32 dump, which the GFP_NOIO allocation cannot use).
The allocation side was addressed by commit 168676649 ("mm,page_alloc,cma:
conditionally prefer cma pageblocks for movable allocations"). This could
be a common scenario for a zone with a large proportion of CMA-reserved
pageblocks, which needs to be considered from both the allocation and the
reclaim perspective.

04166 < 4> [ 36.172486] [03-19 10:05:52.172] ActivityManager: page allocation failure: order:0, mode:0xc00(GFP_NOIO), nodemask=(null),cpuset=foreground,mems_allowed=0
0419C < 4> [ 36.189447] [03-19 10:05:52.189] DMA32: 0*4kB 447*8kB (C) 217*16kB (C) 124*32kB (C) 136*64kB (C) 70*128kB (C) 22*256kB (C) 3*512kB (C) 0*1024kB 0*2048kB 0*4096kB = 35848kB
0419D < 4> [ 36.193125] [03-19 10:05:52.193] Normal: 231*4kB (UMEH) 49*8kB (MEH) 14*16kB (H) 13*32kB (H) 8*64kB (H) 2*128kB (H) 0*256kB 1*512kB (H) 0*1024kB 0*2048kB 0*4096kB = 3236kB
......
041EA < 4> [ 36.234447] [03-19 10:05:52.234] SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
041EB < 4> [ 36.234455] [03-19 10:05:52.234] cache: ext4_io_end, object size: 64, buffer size: 64, default order: 0, min order: 0
041EC < 4> [ 36.234459] [03-19 10:05:52.234] node 0: slabs: 53, objs: 3392, free: 0

> Best Regards,
> Huang, Ying
>
> > Signed-off-by: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
> > ---
> >  mm/vmscan.c | 11 ++++++++++-
> >  1 file changed, 10 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index bd6637f..04424d9 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -2225,10 +2225,16 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
> >  	unsigned long nr_skipped[MAX_NR_ZONES] = { 0, };
> >  	unsigned long skipped = 0;
> >  	unsigned long scan, total_scan, nr_pages;
> > +	bool cma_cap = true;
> > +	struct page *page;
> >  	LIST_HEAD(folios_skipped);
> >
> >  	total_scan = 0;
> >  	scan = 0;
> > +	if (IS_ENABLED(CONFIG_CMA) && !current_is_kswapd() &&
> > +	    gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE)
> > +		cma_cap = false;
> > +
> >  	while (scan < nr_to_scan && !list_empty(src)) {
> >  		struct list_head *move_to = src;
> >  		struct folio *folio;
> >
> > @@ -2239,7 +2245,10 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
> >  		nr_pages = folio_nr_pages(folio);
> >  		total_scan += nr_pages;
> >
> > -		if (folio_zonenum(folio) > sc->reclaim_idx) {
> > +		page = &folio->page;
> > +
> > +		if (folio_zonenum(folio) > sc->reclaim_idx ||
> > +		    (get_pageblock_migratetype(page) == MIGRATE_CMA && !cma_cap)) {
> >  			nr_skipped[folio_zonenum(folio)] += nr_pages;
> >  			move_to = &folios_skipped;
> >  			goto move;
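
For clarity, the new condition can be read as a single eligibility
predicate. This is only a sketch of the intent: the helper name
reclaim_may_use_cma() is invented here, while the patch itself
open-codes the same test via the cma_cap flag in isolate_lru_folios()
as quoted above:

static bool reclaim_may_use_cma(struct scan_control *sc)
{
	/*
	 * Without CONFIG_CMA there are no CMA pageblocks to skip.
	 * kswapd reclaims on behalf of the whole node, so it keeps
	 * scanning CMA folios.  In direct reclaim, only a
	 * MIGRATE_MOVABLE allocation can be served from CMA
	 * pageblocks, so reclaiming them for any other migratetype
	 * frees pages the caller cannot use.
	 */
	return !IS_ENABLED(CONFIG_CMA) ||
	       current_is_kswapd() ||
	       gfp_migratetype(sc->gfp_mask) == MIGRATE_MOVABLE;
}

The skip test in the isolation loop is then equivalent to
"get_pageblock_migratetype(&folio->page) == MIGRATE_CMA &&
!reclaim_may_use_cma(sc)", which is what the hunk above expresses
with cma_cap.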