On Mon, Jan 02, 2012 at 11:06:26AM +0100, Yann Dupont wrote:
> On 22/12/2011 12:02, Yann Dupont wrote:
> >On 22/12/2011 10:23, Yann Dupont wrote:
> >>
> >>>Can you run a block trace on both kernels (for say five minutes)
> >>>when the load differential is showing up and provide that to us so
> >>>we can see how the IO patterns are differing?
> >
> >here we go.
>
> Hello, happy new year everybody,
>
> Did someone have time to examine the 2 blktraces? (and, by chance,
> see the root cause of the increased load?)

I've had a bit of a look, but most people have been on holidays.

As it is, I can't see any material difference between the traces.
Both reads and writes are taking the same amount of time to service,
so I don't think there's any problem here.

I do recall that some years ago we changed one of the ways we slept
in XFS, which meant those blocked IOs contributed to load average (as
they are supposed to). That meant that more IO contributed to the
load average (it might have been read related), so load averages were
then higher for exactly the same workloads. Indeed:

load average: 0.64, 0.15, 0.09

(start 40 concurrent directory traversals w/ unlinks)
(wait a bit)

load average: 39.96, 23.75, 10.06

Yup, that is spot on - 40 processes doing blocking IO.....

So absent any measurable performance problem, I don't think the
change in load average is something to be concerned about.

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
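
[Editor's note: the effect Dave describes - tasks blocked on I/O counting
toward the Linux load average even though they burn almost no CPU - can be
reproduced outside XFS. Below is a minimal, hypothetical sketch, not from
this thread: it forks 40 children that do blocking O_DIRECT reads and
periodically prints /proc/loadavg, which should climb toward 40. The child
count, file path, block size, and timings are made-up values for
illustration; the test file must already exist and be a few megabytes in
size on the filesystem being observed.]

/*
 * loadavg_demo.c - sketch showing that processes blocked on I/O
 * contribute to the Linux load average even while using little CPU.
 *
 * Build: cc -O2 -o loadavg_demo loadavg_demo.c
 */
#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define NCHILDREN 40		/* mirrors the 40 traversals above */
#define TESTFILE  "/mnt/test/loadavg_demo.dat"	/* hypothetical path */
#define BLKSZ     4096

static void child_io_loop(void)
{
	/* O_DIRECT bypasses the page cache, so every read blocks on
	 * real device I/O. */
	int fd = open(TESTFILE, O_RDONLY | O_DIRECT);
	void *buf;

	if (fd < 0 || posix_memalign(&buf, BLKSZ, BLKSZ) != 0)
		_exit(1);

	for (;;) {
		off_t off = (off_t)(rand() % 1024) * BLKSZ;
		if (pread(fd, buf, BLKSZ, off) < 0)
			_exit(1);
	}
}

int main(void)
{
	pid_t pids[NCHILDREN];

	for (int i = 0; i < NCHILDREN; i++) {
		pids[i] = fork();
		if (pids[i] == 0)
			child_io_loop();
	}

	/* Watch the 1-minute load average climb toward NCHILDREN even
	 * though almost no CPU time is being consumed. */
	for (int i = 0; i < 12; i++) {
		char line[128];
		FILE *f = fopen("/proc/loadavg", "r");

		if (f && fgets(line, sizeof(line), f))
			fputs(line, stdout);
		if (f)
			fclose(f);
		sleep(10);
	}

	for (int i = 0; i < NCHILDREN; i++) {
		kill(pids[i], SIGTERM);
		waitpid(pids[i], NULL, 0);
	}
	return 0;
}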