[...]
> unsigned long
> isolate_freepages_range(struct compact_control *cc,
> -			unsigned long start_pfn, unsigned long end_pfn);
> +			unsigned long start_pfn, unsigned long end_pfn,
> +			struct list_head *freepage_list);
> unsigned long
> isolate_migratepages_range(struct compact_control *cc,
> 			unsigned long low_pfn, unsigned long end_pfn);
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index caf393d8b413..cdf956feae80 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8402,10 +8402,14 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
>  }
>
>  static int __alloc_contig_range(unsigned long start, unsigned long end,
> -			unsigned migratetype, gfp_t gfp_mask)
> +			unsigned int migratetype, gfp_t gfp_mask,
> +			unsigned int alloc_order,
> +			struct list_head *freepage_list)

I have to say that this interface gets really ugly, especially as you add yet
another (questionable to me) parameter in the next patch. I don't like that.

It feels like you're trying to squeeze a very specific behavior into a fairly
simple and basic range allocator (well, it's complicated stuff, but the
interface is at least fairly simple). Something like that should be handled on
a higher level if possible.

And similar to Matthew, I am not sure if working on PFN ranges is actually
what we want here. You only want *some* order-4 pages in your driver and
identified performance issues when using CMA for the individual allocations.
Now you convert the existing range allocator API into an "allocate something
within something" interface, but don't even show how that would be used
within CMA to actually speed things up.

I still wonder if there isn't an easier approach to achieve what you want:
speeding up CMA allocations on the one hand, and dealing with temporarily
unmovable pages on the other hand. Any experts around?

-- 
Thanks,

David / dhildenb
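To illustrate the "handle it on a higher level" point: instead of threading
`alloc_order`/`freepage_list` through `__alloc_contig_range()`, the driver
could do one large CMA allocation and carve it into order-4 chunks itself.
This is an untested, hypothetical sketch (the helper name and chunk count are
made up, and `cma_alloc()` already hands back order-0 pages for the whole
range, so the split is just pointer arithmetic here), not a claim about how
the posted series works:

```c
/*
 * Hypothetical driver-side helper (untested sketch): amortize CMA
 * allocation cost by doing a single cma_alloc() for many order-4
 * chunks, then handing out the chunk heads, instead of calling into
 * the allocator once per chunk.
 */
#define CHUNK_ORDER	4

static int driver_alloc_order4_chunks(struct cma *cma, unsigned int nr,
				      struct list_head *out)
{
	struct page *base;
	unsigned int i;

	/* one allocation covering nr chunks of 2^CHUNK_ORDER pages */
	base = cma_alloc(cma, nr << CHUNK_ORDER, CHUNK_ORDER, false);
	if (!base)
		return -ENOMEM;

	for (i = 0; i < nr; i++) {
		struct page *chunk = base + (i << CHUNK_ORDER);

		/* track each order-4-aligned chunk head for the driver */
		list_add_tail(&chunk->lru, out);
	}
	return 0;
}
```

That keeps the range allocator interface untouched and confines the
"I want N order-4 pieces" policy to the caller, which is roughly what I
mean by handling it at a higher level.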