It seems that fragmentation is in the way of many kernel subsystems, and we keep adding allocators to deal with the problem of getting access to contiguous memory (CMA, ION, huge page allocators, memory pools, boot memory management, etc.). Some subsystems' performance already depends on contiguous physical memory (like the slub allocator, which can use larger physical pages but falls back to smaller sizes if those are not available; a minimal sketch of that fallback pattern follows at the end of this mail). The limited availability of higher-order pages limits the performance reachable by various Linux subsystems, since we run into having too many page structs to handle when managing memory (see my talk "Bazillions of pages" at OLS 2008).

I would like to discuss approaches to dealing with this problem:

- Can we reduce the number of times we create new allocators to manage the problem? Maybe have a couple that cover all the use cases and that have expandable APIs?

- Can we generalize the existing approaches of reserving pages of a certain size (huge pages, giant pages) to an arbitrary order? For example, would it be possible to create 64k page pools to handle devices that require 64k physical blocks for full performance? (A rough pool sketch follows below.)

- Are there any novel approaches to defragmentation? For example, in the past I have pushed one approach that emerges from what page migration accomplishes: if all memory becomes movable, then defragmentation becomes possible. I have previously added features to make slab memory movable (via callbacks; a sketch of those callbacks also follows below). This could be generalized and used in other allocators to ensure that memory is movable. If we can get as far as ensuring that all of memory is movable, then the fragmentation problem goes away.

- Maybe also an open discussion on more ways to address these issues?
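Here is the slub-style fallback mentioned above, as a minimal sketch rather than the actual mm/slub.c code: attempt the preferred higher order opportunistically, with reclaim, retries and failure warnings suppressed, and only pay for a guaranteed allocation at the minimum order when that fails.

#include <linux/gfp.h>
#include <linux/mm_types.h>

static struct page *alloc_slab_pages(gfp_t flags,
				     unsigned int preferred_order,
				     unsigned int min_order)
{
	struct page *page;

	/* Opportunistic attempt: no reclaim, no retries, no warnings. */
	page = alloc_pages((flags | __GFP_NORETRY | __GFP_NOWARN) &
			   ~__GFP_DIRECT_RECLAIM, preferred_order);
	if (page)
		return page;

	/* Fragmentation won; settle for the smallest workable size. */
	return alloc_pages(flags, min_order);
}

The point of masking out direct reclaim in the first attempt is that the higher-order page is merely nice to have: it should not cost compaction or reclaim work on the hot path.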
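For the 64k pool idea, the following sketch shows one possible shape. Everything here (order_pool and its functions) is hypothetical, invented for illustration; it is not an existing kernel API. The idea is simply to grab order-4 pages (16 * 4k = 64k) early, before fragmentation sets in, and hand them out later from a list chained through page->lru:

#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/spinlock.h>

/* A pool of physically contiguous blocks of one fixed order. */
struct order_pool {
	unsigned int order;	/* e.g. 4: 16 * 4k pages = 64k blocks */
	struct list_head free;	/* free blocks, chained via page->lru */
	spinlock_t lock;
};

static void order_pool_init(struct order_pool *pool, unsigned int order)
{
	pool->order = order;
	INIT_LIST_HEAD(&pool->free);
	spin_lock_init(&pool->lock);
}

/* Reserve blocks early, while contiguous memory is still available. */
static int order_pool_fill(struct order_pool *pool, unsigned long count)
{
	while (count--) {
		struct page *page = alloc_pages(GFP_KERNEL, pool->order);

		if (!page)
			return -ENOMEM;
		spin_lock(&pool->lock);
		list_add(&page->lru, &pool->free);
		spin_unlock(&pool->lock);
	}
	return 0;
}

/* Hand out one contiguous block, or NULL if the pool is exhausted. */
static struct page *order_pool_get(struct order_pool *pool)
{
	struct page *page = NULL;

	spin_lock(&pool->lock);
	if (!list_empty(&pool->free)) {
		page = list_first_entry(&pool->free, struct page, lru);
		list_del(&page->lru);
	}
	spin_unlock(&pool->lock);
	return page;
}

If the order became a parameter of one generic pool like this, we would not need a separate allocator per page size, which is the point of the first question above.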
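And here is roughly the shape of the movability callbacks for slab: the cache owner provides an isolate hook that pins a set of objects in a sparsely used slab page and a migrate hook that relocates them, after which the allocator can free the page. The signatures and the foo_* helpers are illustrative, loosely following my old slab defragmentation patches rather than quoting them:

#include <linux/slab.h>
#include <linux/string.h>

/*
 * Hypothetical helpers assumed to exist for this sketch: pin/unpin an
 * object against concurrent freeing, and repoint all users from the
 * old copy to the new one.
 */
void foo_get(void *obj);
void foo_put(void *obj);
void foo_update_references(void *old, void *new);

static void *foo_isolate(struct kmem_cache *s, void **objs, int nr)
{
	int i;

	for (i = 0; i < nr; i++)
		foo_get(objs[i]);	/* keep objects alive while moving them */
	return NULL;			/* no private state to hand to migrate */
}

static void foo_migrate(struct kmem_cache *s, void **objs, int nr,
			int node, void *private)
{
	int i;

	for (i = 0; i < nr; i++) {
		void *old = objs[i];
		void *new = kmem_cache_alloc_node(s, GFP_KERNEL, node);

		if (!new)
			continue;	/* out of memory: leave the old object in place */
		memcpy(new, old, s->object_size);
		foo_update_references(old, new);
		foo_put(old);		/* drop the pin; the old copy is now freed */
	}
}

Once every object in a slab page has been migrated away like this, the page is empty and can be returned to the page allocator, which is exactly the movability property the third point above asks for.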