On Fri, 7 Jun 2013, Roman Gushchin wrote:

> As I understand, the idea was to make kernel allocations cheaper by
> reducing the total number of page allocations (allocating 1 page with
> order 3 is cheaper than allocating 8 order-0 pages).

It also affects allocator speed. With fewer page structures to manage,
the metadata effort is reduced, and with more objects per page the slub
fastpath is more likely to be used (visible in allocator benchmarks).

Slub can fall back dynamically to order-0 pages if necessary, so it
takes advantage of contiguous pages opportunistically (a rough sketch
of that fallback pattern is at the end of this message).

> I'm sure it's true for a recently rebooted machine with a lot of free,
> non-fragmented memory. But is it also true for a heavily loaded machine
> with fragmented memory? Are we sure that it's cheaper to run compaction
> and allocate an order-3 page than to use small order-0 slabs? Do I miss
> something?

We do have defragmentation logic and defragmentation passes to address
that. In general, the need for larger physically contiguous memory
segments will increase as RAM gets larger and larger. Maybe 2M is the
next step, but we will always have to face fragmentation regardless of
what the next size is.
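For illustration, here is a minimal userspace sketch of the
"try the preferred order, fall back to the minimum order" pattern
(compare allocate_slab() in mm/slub.c). The mmap-based helper and the
order constants are stand-ins, not the kernel API, and in userspace a
32K mapping will rarely fail, so this only demonstrates the control
flow, not real fragmentation behavior:

#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define PREFERRED_ORDER 3   /* try 8 contiguous pages first */
#define MIN_ORDER       0   /* fall back to a single page   */

static void *alloc_pages_order(unsigned int order, long page_size)
{
	size_t len = (size_t)page_size << order;
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	return p == MAP_FAILED ? NULL : p;
}

int main(void)
{
	long page_size = sysconf(_SC_PAGESIZE);
	unsigned int order = PREFERRED_ORDER;

	/* Opportunistic attempt at the higher order... */
	void *slab = alloc_pages_order(order, page_size);
	if (!slab) {
		/* ...and a cheap fallback rather than forcing
		 * reclaim/compaction for the large request. */
		order = MIN_ORDER;
		slab = alloc_pages_order(order, page_size);
	}
	if (!slab)
		return 1;

	printf("got %ld bytes at order %u\n",
	       page_size << order, order);
	munmap(slab, (size_t)page_size << order);
	return 0;
}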