On Sun, 9 Sep 2012, David Cohen wrote:

> Requesting pages with order > 0 is faster than requesting a single
> page 20k times if memory isn't fragmented. But if memory is
> fragmented, at some point order > 0 may not be available and the page
> allocation goes through a more expensive path, which ends up being
> slower than requesting 20k single pages. I'd like to have a way to
> choose the faster option depending on the fragmentation scenario.
> Is there currently a reliable solution for this case? I couldn't find one.
> If the answer is really "no", how does it sound to implement a
> function, e.g. alloc_pages_try_orders(mask, min_order, max_order)?

I don't think that's generally useful, so it would have to be isolated to the driver you're working on. What I would suggest instead is to avoid doing memory compaction and reclaim for higher orders, and rather fall back to allocating smaller and smaller orders first. Try using fragmentation_index() to determine the optimal order to allocate given the current state of fragmentation; if that's insufficient, then you'll have to fall back to using memory compaction. You'll want to compact much more than a single order-9 page allocation, though, so perhaps explicitly trigger compact_node() beforehand and incur the penalty only once.