On 3/6/20 9:01 PM, Rik van Riel wrote:
> Posting this one for Roman so I can deal with any upstream feedback and
> create a v2 if needed, while scratching my head over the next piece of
> this puzzle :)
>
> ---8<---
>
> From: Roman Gushchin <guro@xxxxxx>
>
> Currently a cma area is barely used by the page allocator because
> it's used only as a fallback from movable, however kswapd tries
> hard to make sure that the fallback path isn't used.

A few years ago Joonsoo wanted to fix these kinds of weird MIGRATE_CMA
corner cases by using ZONE_MOVABLE instead [1]. Unfortunately it was
reverted due to unresolved bugs. Perhaps the idea could be resurrected
now?

[1] https://lore.kernel.org/linux-mm/1512114786-5085-1-git-send-email-iamjoonsoo.kim@xxxxxxx/

> This results in a system evicting memory and pushing data into swap,
> while lots of CMA memory is still available. This happens despite the
> fact that alloc_contig_range is perfectly capable of moving any movable
> allocations out of the way of an allocation.
>
> To effectively use the cma area let's alter the rules: if the zone
> has more free cma pages than half of the total free pages in the zone,
> use cma pageblocks first and fall back to movable blocks in the case
> of failure.
>
> Signed-off-by: Rik van Riel <riel@xxxxxxxxxxx>
> Co-developed-by: Rik van Riel <riel@xxxxxxxxxxx>
> Signed-off-by: Roman Gushchin <guro@xxxxxx>
> ---
>  mm/page_alloc.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3c4eb750a199..0fb3c1719625 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2711,6 +2711,18 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
>  {
>  	struct page *page;
>
> +	/*
> +	 * Balance movable allocations between regular and CMA areas by
> +	 * allocating from CMA when over half of the zone's free memory
> +	 * is in the CMA area.
> +	 */
> +	if (migratetype == MIGRATE_MOVABLE &&
> +	    zone_page_state(zone, NR_FREE_CMA_PAGES) >
> +	    zone_page_state(zone, NR_FREE_PAGES) / 2) {
> +		page = __rmqueue_cma_fallback(zone, order);
> +		if (page)
> +			return page;
> +	}
>  retry:
>  	page = __rmqueue_smallest(zone, order, migratetype);
>  	if (unlikely(!page)) {
>
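For readers outside mm/, the balancing heuristic the patch adds can be sketched as a standalone predicate (a simplified illustration, not kernel code — the kernel reads these counters via zone_page_state() on the real struct zone):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Simplified model of the patch's decision in __rmqueue(): for a
 * MIGRATE_MOVABLE request, try the CMA pageblocks first whenever the
 * zone's free CMA pages exceed half of the zone's total free pages.
 * On failure the allocator still falls through to the normal path.
 */
static bool try_cma_first(unsigned long free_cma_pages,
			  unsigned long free_pages)
{
	return free_cma_pages > free_pages / 2;
}
```

So with 600 free CMA pages out of 1000 total free pages the allocator would dip into CMA first, while at exactly half (500 of 1000) it would not, since the comparison is strictly greater-than.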