As OSDL develops tests for the individual functional pieces of code associated with memory hotplug, we also need to test memory fragmentation explicitly. Mel Gorman, the author of the memory fragmentation avoidance (placement policy) patches, described a testing methodology that I think we should use as the basis for an OSDL test. I've included a link to the original email and excerpted the relevant text below.

-------------------------------
From http://lkml.org/lkml/2005/5/31/68

The last test is to show that the allocator can satisfy more high-order allocations, especially under load, than the standard allocator. The test performs the following:

1. Start updatedb running in the background
2. Load a kernel module that tries to allocate high-order blocks on demand
3. Clean a kernel tree
4. Make 6 copies of the tree. As each copy finishes, a compile starts at -j4
5. Start compiling the primary tree
6. Sleep 3 minutes while the 7 trees are being compiled
7. Use the kernel module to attempt 160 times to allocate a 2^10 block of pages
   - note, it only attempts 160 times, no matter how often it succeeds
   - An allocation is attempted every 1/10th of a second
   - Performance will get badly shot as it forces considerable amounts of pageout

The results of the allocations under load (load average 25) were:

2.6.12-rc5 Standard
  Order:                 10
  Attempted allocations: 160
  Success allocs:        3
  Failed allocs:         108
  % Success:             1

2.6.12-rc4 MBuddy V12
  Order:                 10
  Attempted allocations: 160
  Success allocs:        63
  Failed allocs:         97
  % Success:             39

It is important to note that the standard allocator invoked the out-of-memory killer so often that it killed almost all available processes, including X, sshd and all instances of make and gcc. The patch with the placement policy never invoked the OOM killer. The downside of the mbuddy allocator is that it takes a long time to free up the MAX_ORDER sized pages, since pages are freed in LRU order.
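
For reference, below is a minimal sketch of what the on-demand allocation module from steps 2 and 7 could look like. The excerpt does not describe the original module's interface, so the module name, parameters, the load-time loop, and the decision to free each block immediately are all assumptions of this sketch, not Mel Gorman's actual test module.

/*
 * highorder_test.c - sketch of a module that attempts a fixed number of
 * high-order allocations, one every 1/10th of a second, and reports how
 * many succeeded.  Illustrative only; not the original test module.
 */
#include <linux/module.h>
#include <linux/init.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/delay.h>

static unsigned int order = 10;      /* 2^10 pages per attempt */
static unsigned int attempts = 160;  /* fixed attempt count, as in the test */
module_param(order, uint, 0444);
module_param(attempts, uint, 0444);

static int __init highorder_test_init(void)
{
	unsigned int i, success = 0;

	for (i = 0; i < attempts; i++) {
		/* One high-order allocation attempt; suppress failure warnings. */
		struct page *page = alloc_pages(GFP_KERNEL | __GFP_NOWARN, order);

		if (page) {
			success++;
			/*
			 * Assumption: free the block immediately so each attempt
			 * measures current availability.  The original module may
			 * instead have held successful allocations.
			 */
			__free_pages(page, order);
		}

		/* An allocation is attempted every 1/10th of a second. */
		msleep(100);
	}

	pr_info("highorder_test: order %u, attempted %u, succeeded %u\n",
		order, attempts, success);

	/* Return an error so the module does not stay resident after the run. */
	return -EAGAIN;
}

module_init(highorder_test_init);
MODULE_LICENSE("GPL");

Built against the running kernel's headers, a module like this would be loaded with insmod once the updatedb and compile load from steps 1-6 is in place, and the success count read from the kernel log. The original module may well have exposed a proc or sysfs trigger instead of running at load time; the excerpt does not say.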