Changelog since v5
o Changelog clarification in patch 1
o Additional comments in patch 2

Changelog since v4
o Avoid pcp->count getting out of sync if struct page gets corrupted

Changelog since v3
o Allow high-order atomic allocations to use reserves

Changelog since v2
o Correct initialisation to avoid -Woverflow warning

The following are two patches that implement a per-cpu cache for
high-order allocations, primarily aimed at SLUB. The first patch is a
bug fix that is technically unrelated but was discovered by review and
so is batched together with it. The second patch implements the
high-order pcpu cache.

 include/linux/mmzone.h |  20 +++++++-
 mm/page_alloc.c        | 129 ++++++++++++++++++++++++++++++++-----------------
 2 files changed, 103 insertions(+), 46 deletions(-)

-- 
2.10.2