On Fri, Apr 13, 2012 at 02:37:22PM -0400, Ted Ts'o wrote:
> On Fri, Apr 13, 2012 at 11:31:16AM -0700, Tim Chen wrote:
> >
> > The benchmark is working on files on a normal hard disk.
> > However, I have a large number of processes (80 processes,
> > one for each cpu), each reading a separate mmaped file.
> > The files are in the same directory. That makes cache line
> > bouncing on the counters particularly bad due to the large
> > number of processes running.
>
> OK, so this is with an 80 CPU machine?

4 sockets, 40 cores, 80 threads.

> And when you say 20% speed up, do you mean to say we are actually
> being CPU constrained when reading from files on a normal hard disk?

The files are in memory, but we're still CPU constrained due to various
other issues.

> The reason why I ask this is we're not seeing anything like this with
> Eric Whitney's 48 CPU scalability testing; we're not CPU bottlenecked,
> and I don't even see evidence of a larger than usual CPU utilization
> compared to other file systems.

I bet Eric didn't test with this statistic counter.

> Ultimately, this isn't a regression and if Linus is willing to take a

The old kernel didn't have that problem, so it's a regression.

> change at this point, I'm willing to send it --- but I really don't
> understand the urgency.

If we don't fix performance regressions before each release, then Linux
will get slower and slower. At least I don't want a slow Linux.

-Andi
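
P.S. For anyone unfamiliar with why the counter layout matters here, below
is a minimal userspace sketch of the contention pattern Tim describes:
many threads doing atomic increments on a single shared counter bounce its
cache line between CPUs (and sockets), while per-thread counters padded to
their own cache lines take plain local stores and are only summed when
read. This is an illustration only, not the kernel's percpu_counter code;
the thread count, iteration count, and 64-byte cache line size are
assumptions.

/*
 * Sketch: shared atomic counter (cache line bouncing) vs. per-thread
 * counters padded to separate cache lines (local writes, summed on read).
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define NTHREADS  8
#define NITERS    5000000L
#define CACHELINE 64

static long shared_counter;		/* one hot line shared by all CPUs */

struct padded {				/* one cache line per thread */
	long count;
	char pad[CACHELINE - sizeof(long)];
};
static struct padded local[NTHREADS];

static void *bump_shared(void *arg)
{
	(void)arg;
	/* Every atomic RMW pulls the line exclusive to this CPU. */
	for (long i = 0; i < NITERS; i++)
		__atomic_fetch_add(&shared_counter, 1, __ATOMIC_RELAXED);
	return NULL;
}

static void *bump_local(void *arg)
{
	struct padded *c = arg;

	/* Plain stores to a private line: no cross-CPU traffic. */
	for (long i = 0; i < NITERS; i++)
		c->count++;
	return NULL;
}

static double run(void *(*fn)(void *), int use_local)
{
	pthread_t tid[NTHREADS];
	struct timespec t0, t1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, fn,
			       use_local ? (void *)&local[i] : NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
	printf("shared counter:    %.2fs\n", run(bump_shared, 0));
	printf("per-thread padded: %.2fs\n", run(bump_local, 1));

	long total = 0;			/* reader sums on demand */
	for (int i = 0; i < NTHREADS; i++)
		total += local[i].count;
	printf("per-thread total = %ld\n", total);
	return 0;
}

Compile with "gcc -O2 -pthread". On a multi-socket machine the
shared-counter run is typically several times slower than the padded
per-thread run; the gap only grows with more sockets, which is the same
win the per-cpu approach buys the ext4 statistics counters.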