On Mon, May 22, 2023 at 02:36:03PM +0800, zhaoyang.huang wrote:
> +#ifdef CONFIG_CMA
> +/*
> + * It is waste of effort to scan and reclaim CMA pages if it is not available
> + * for current allocation context
> + */
> +static bool skip_cma(struct folio *folio, struct scan_control *sc)
> +{
> +        if (!current_is_kswapd() &&
> +            gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE &&
> +            get_pageblock_migratetype(&folio->page) == MIGRATE_CMA)
> +                return true;
> +        return false;
> +}
> +#else
> +static bool skip_cma(struct folio *folio, struct scan_control *sc)
> +{
> +        return false;
> +}
> +#endif
> +
>  /*
>   * Isolating page from the lruvec to fill in @dst list by nr_to_scan times.
>   *
> @@ -2239,7 +2259,8 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
>                 nr_pages = folio_nr_pages(folio);
>                 total_scan += nr_pages;
>
> -                if (folio_zonenum(folio) > sc->reclaim_idx) {
> +                if (folio_zonenum(folio) > sc->reclaim_idx ||
> +                    skip_cma(folio, sc)) {
>                         nr_skipped[folio_zonenum(folio)] += nr_pages;
>                         move_to = &folios_skipped;
>                         goto move;

I have no idea if what this patch is trying to accomplish is correct, but
I no longer object to how it is doing it.
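
For anyone skimming the thread, here is a minimal user-space sketch of the
predicate being added. The kernel helpers (current_is_kswapd(),
gfp_migratetype(), get_pageblock_migratetype()) are replaced by plain
parameters and a reduced migratetype enum; the harness is hypothetical,
not kernel code. The skip fires only when all three conditions hold: the
caller is not kswapd, the allocating context cannot use CMA pageblocks
(its gfp migratetype is not MIGRATE_MOVABLE), and the folio sits in a
CMA pageblock.

/*
 * Stand-alone sketch of the skip_cma() decision. The kernel's real
 * helpers are stubbed out as explicit arguments for illustration.
 */
#include <stdbool.h>
#include <stdio.h>

enum migratetype { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE, MIGRATE_CMA };

static bool skip_cma_sketch(bool is_kswapd,
                            enum migratetype alloc_mt,
                            enum migratetype pageblock_mt)
{
        /*
         * CMA pages can only satisfy movable allocations, so reclaiming
         * them on behalf of a non-movable request is wasted work.
         */
        return !is_kswapd &&
               alloc_mt != MIGRATE_MOVABLE &&
               pageblock_mt == MIGRATE_CMA;
}

int main(void)
{
        /* Direct reclaim for an unmovable allocation, CMA folio: skipped. */
        printf("%d\n", skip_cma_sketch(false, MIGRATE_UNMOVABLE, MIGRATE_CMA));
        /* kswapd never skips, regardless of the triggering allocation. */
        printf("%d\n", skip_cma_sketch(true, MIGRATE_UNMOVABLE, MIGRATE_CMA));
        return 0;
}

Running this prints "1" then "0": direct reclaim for an unmovable
allocation skips the CMA folio, while kswapd does not, since it reclaims
on behalf of the whole node rather than any single allocation context.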