On Fri, Apr 13, 2012 at 11:31:16AM -0700, Tim Chen wrote:
> The benchmark is working on files on a normal hard disk.  However, I
> have a large number of processes (80 processes, one for each cpu),
> each reading a separate mmaped file.  The files are in the same
> directory.  That makes cache line bouncing on the counters
> particularly bad due to the large number of processes running.

OK, so this is with an 80 CPU machine?  And when you say 20% speed up,
do you mean to say we are actually being CPU constrained when reading
from files on a normal hard disk?

The reason why I ask this is that we're not seeing anything like this
with Eric Whitney's 48 CPU scalability testing; we're not CPU
bottlenecked, and I don't even see evidence of larger than usual CPU
utilization compared to other file systems.

So I'm still trying to understand why your results are so different
from what Eric has been seeing, and I'm still puzzled why this is
super urgent.  Ultimately, this isn't a regression, and if Linus is
willing to take a change at this point, I'm willing to send it --- but
I really don't understand the urgency.

Best regards,

					- Ted
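[Editor's note: for readers unfamiliar with the cache line bouncing Tim
describes, below is a minimal user-space sketch of the effect.  It is
not ext4 or kernel code; the names (NTHREADS, run_shared, run_percpu,
struct padded_counter) are made up for illustration only.  It contrasts
many threads incrementing one shared counter, whose cache line
ping-pongs between CPUs, with per-thread counters that a reader sums on
demand, which is the same idea behind the kernel's per-CPU counters.]

/*
 * Sketch only: shared counter vs. per-thread (per-CPU style) counters.
 * Build with:  gcc -O2 -pthread sketch.c -o sketch
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS 8
#define ITERS    10000000UL

/* One shared counter: every increment pulls the cache line to the
 * incrementing CPU in exclusive state, so the line bounces around. */
static atomic_ulong shared_counter;

/* Per-thread counters, each padded to its own cache line so no two
 * threads ever write the same line. */
struct padded_counter {
	unsigned long val;
	char pad[64 - sizeof(unsigned long)];
};
static struct padded_counter counters[NTHREADS];

static void *run_shared(void *arg)
{
	(void)arg;
	for (unsigned long i = 0; i < ITERS; i++)
		atomic_fetch_add_explicit(&shared_counter, 1,
					  memory_order_relaxed);
	return NULL;
}

static void *run_percpu(void *arg)
{
	struct padded_counter *c = arg;
	for (unsigned long i = 0; i < ITERS; i++)
		c->val++;	/* private line: stays in this CPU's cache */
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	unsigned long total = 0;
	int i;

	/* Contended case: all threads hammer one cache line. */
	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, run_shared, NULL);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);

	/* Uncontended case: each thread owns its own line; the reader
	 * sums the per-thread values only when it needs the total. */
	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, run_percpu, &counters[i]);
	for (i = 0; i < NTHREADS; i++) {
		pthread_join(tid[i], NULL);
		total += counters[i].val;
	}

	printf("shared=%lu percpu-total=%lu\n",
	       atomic_load(&shared_counter), total);
	return 0;
}

Timing the two phases (e.g. with time(1)) on a many-core machine shows
the shared case degrading as the thread count grows, which is roughly
the contention Tim attributes to the counters in his 80-process
benchmark.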