Hi,

I work with embedded Linux but am new to the Linux MM community. I need a way to improve performance when allocating a large number of pages. I can't describe the exact scenario, but I need to request more than 20k pages in a time-sensitive task.

Requesting pages with order > 0 is faster than requesting a single page 20k times, as long as memory isn't fragmented. But when memory is fragmented, at some point order > 0 pages may no longer be available from the free lists, and the allocation goes through a more expensive path, which ends up being slower than requesting 20k single pages. I'd like to have a way to choose the faster option depending on the fragmentation scenario.

Is there currently a reliable solution for this case? I couldn't find one.

If the answer really is "no", how does it sound to implement a function, e.g. alloc_pages_try_orders(mask, min_order, max_order)? The idea would be to try to get a page from the free lists (fast path only) with order <= max_order and > min_order (the higher the better), and to allow the slow path only when min_order is the only remaining option.

Thanks for your time,
David Cohen

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@xxxxxxxxx. For more info on Linux MM, see: http://www.linux-mm.org/ .
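
P.S. A rough sketch of what I have in mind, written as untested pseudocode modeled on the existing alloc_pages() interface. The function name is hypothetical, and using __GFP_NORETRY | __GFP_NOWARN to keep the high-order attempts on the fast path is my assumption about how this could be expressed:

```
/* Hypothetical helper: try orders (min_order, max_order] from the
 * free lists only, falling back to min_order with the full gfp mask. */
static struct page *alloc_pages_try_orders(gfp_t gfp_mask,
					   unsigned int min_order,
					   unsigned int max_order)
{
	struct page *page;
	unsigned int order;

	/* Higher orders first; __GFP_NORETRY avoids expensive
	 * reclaim/compaction retries and __GFP_NOWARN suppresses
	 * allocation-failure warnings for these opportunistic tries. */
	for (order = max_order; order > min_order; order--) {
		page = alloc_pages(gfp_mask | __GFP_NORETRY | __GFP_NOWARN,
				   order);
		if (page)
			return page;
	}

	/* min_order is the only option left: allow the slow path. */
	return alloc_pages(gfp_mask, min_order);
}
```

The caller would still need to track which order it actually received, e.g. by returning the order through an out-parameter, so it knows how many pages each successful call covers.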