On Mon, 2010-04-12 at 14:26 -0400, Martin K. Petersen wrote:
> >>>>> "James" == James Bottomley <James.Bottomley@xxxxxxx> writes:
>
> >> Correct. It's quite unlikely for pages to be contiguous so this is
> >> the best we can do.
>
> James> Actually, average servers do about 50% contiguous on average
> James> since we changed the mm layer to allocate in ascending physical
> James> page order ... this figure is highly sensitive to mm changes
> James> though, and can vary from release to release.
>
> Interesting. When did this happen?

The initial work was done by Bill Irwin, years ago.  For a while it was
good, but after Mel Gorman reworked the page reclaim code we became
highly sensitive to the reclaim algorithms, so the figure has fluctuated
ever since.  Even then, the efficiency depends heavily on the amount of
free memory: once the machine starts running to exhaustion (excluding
page cache, since that usually allocates contiguously to begin with),
the contiguity really drops.

> Last time I gathered data on segment merge efficiency (1 year+ ago) I
> found that adjacent pages were quite rare for a normal fs type workload.
> Certainly not in the 50% ballpark. I'll take another look when I have a
> moment...

I got 60% with an I/O bound test and about a gigabyte of free memory a
while ago (2.6.31, I think).  Even for machines approaching memory
starvation, 30% seems achievable.

James

--
To unsubscribe from this list: send the line "unsubscribe linux-ide" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
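
[Editor's note: for readers outside the thread, here is a minimal userspace
sketch of what the percentages above measure.  The PFN list is invented, and
real segment merging in the block layer also honours queue limits (max
segment size, boundary masks) that are ignored here; this only illustrates
the basic check: a page can join the previous scatter-gather segment when
its physical frame immediately follows the previous one.]

/*
 * Toy estimate of "segment merge efficiency": the fraction of pages in
 * an I/O whose physical frame is adjacent to the previous page's, so
 * they collapse into the same scatter-gather segment.
 */
#include <stdio.h>
#include <stddef.h>

/* Hypothetical page frame numbers, in the order the pages were allocated. */
static const unsigned long pfns[] = {
	1000, 1001, 1002,	/* three contiguous pages -> one segment */
	2048,			/* gap -> new segment */
	2049, 2050,		/* contiguous with the previous page */
	4096,			/* gap -> new segment */
};

int main(void)
{
	size_t n = sizeof(pfns) / sizeof(pfns[0]);
	size_t merged = 0, segments = 1;

	for (size_t i = 1; i < n; i++) {
		if (pfns[i] == pfns[i - 1] + 1)
			merged++;	/* physically adjacent: merges */
		else
			segments++;	/* discontiguity: starts a new segment */
	}

	printf("%zu pages, %zu segments\n", n, segments);
	printf("merge efficiency: %.0f%%\n", 100.0 * merged / (n - 1));
	return 0;
}

With the made-up PFNs above this prints roughly 67%; the 50%/60%/30% figures
in the thread are the same ratio measured on real workloads.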