On Sat, 2020-03-07 at 14:38 -0800, Andrew Morton wrote:
> On Fri, 6 Mar 2020 15:01:02 -0500 Rik van Riel <riel@xxxxxxxxxxx> wrote:
>
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2711,6 +2711,18 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
> >  {
> >  	struct page *page;
> >  
> > +	/*
> > +	 * Balance movable allocations between regular and CMA areas by
> > +	 * allocating from CMA when over half of the zone's free memory
> > +	 * is in the CMA area.
> > +	 */
> > +	if (migratetype == MIGRATE_MOVABLE &&
> > +	    zone_page_state(zone, NR_FREE_CMA_PAGES) >
> > +	    zone_page_state(zone, NR_FREE_PAGES) / 2) {
> > +		page = __rmqueue_cma_fallback(zone, order);
> > +		if (page)
> > +			return page;
> > +	}
> >  retry:
> >  	page = __rmqueue_smallest(zone, order, migratetype);
> >  	if (unlikely(!page)) {
>
> __rmqueue() is a hot path (as much as any per-page operation can be a
> hot path). What is the impact here?

That is a good question. For MIGRATE_MOVABLE allocations, most
requests seem to be either order 0, which go through the per-CPU
pages array and rmqueue_pcplist(), or order 9.

For order 9 allocations, other things seem likely to dominate the
cost of the allocation anyway, while for order 0 allocations the
pcp list should take away the sting.

What I do not know is how much impact this change would have on
other allocations, like order 3 or order 4 network buffer
allocations from irq context...

Are there cases in particular that we should be testing?
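The check itself should be cheap: zone_page_state() is just an
atomic_long_read() of the per-zone vmstat counter, roughly like
this (paraphrasing include/linux/vmstat.h; exact details vary by
kernel version and config):

static inline unsigned long zone_page_state(struct zone *zone,
					enum zone_stat_item item)
{
	long x = atomic_long_read(&zone->vm_stat[item]);
#ifdef CONFIG_SMP
	/* unfolded per-cpu deltas can leave the global counter negative */
	if (x < 0)
		x = 0;
#endif
	return x;
}

So the new branch adds two counter reads and a compare in front of
the existing fast path; the bigger unknown is the extra
__rmqueue_cma_fallback() free list walk whenever the condition
triggers.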
-- 
All Rights Reversed.