On Fri, Sep 03, 2010 at 08:21:01PM -0700, Andrew Morton wrote:
> On Sat, 4 Sep 2010 12:25:45 +1000 Dave Chinner <david@xxxxxxxxxxxxx> wrote:
>
> > Still, given the improvements in performance from this patchset,
> > I'd say inclusion is a no-brainer....
>
> OK, thanks.
>
> It'd be interesting to check the IPI frequency with and without -
> /proc/interrupts "CAL" field.  Presumably it went down a lot.  Maybe

I suspected you would ask for this. I happened to dump
/proc/interrupts after the livelock run finished, so you're in luck :)

The lines below are:

before:   before running the single 50M inode create workload
after:    the numbers after the run completes
livelock: the numbers after two runs with a livelock in the second

Vanilla 2.6.36-rc3:

before:     561    350    614    282    559    335    365    363
after:    10472  10473  10544  10681   9818  10837  10187   9923

2.6.36-rc3 with patchset:

before:     452    426    441    337    748    321    498    357
after:     9463   9112   8671   8830   9391   8684   9768   8971

The numbers aren't that different - roughly 10% lower on average with
the patchset. I will note that the vanilla kernel runs I just did had
noticeably more consistent performance than the previous results I
achieved, so perhaps the livelock conditions weren't being triggered
as effectively this time through.

And finally:

livelock: 59458  58367  58559  59493  59614  57970  59060  58207

So the livelock case indicates that roughly 40,000 more IPI interrupts
per CPU occurred. The livelock lasted close to 5 minutes, so that's
roughly 130 IPIs per second per CPU....

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx
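As a postscript for anyone wanting to reproduce the arithmetic: the per-CPU rate is just the CAL delta between two /proc/interrupts snapshots divided by the elapsed time. A minimal sketch (the `cal_rates` helper and the uniform sample numbers are illustrative, not from any kernel tooling; 300 s approximates the ~5 minute livelock window):

```python
# Sketch: derive per-CPU function-call IPI rates from two "CAL" rows
# sampled out of /proc/interrupts.  The helper name and the flat sample
# deltas below are hypothetical, chosen to match the figures in the mail.

def cal_rates(before, after, seconds):
    """Per-CPU CAL interrupt rates between two snapshots."""
    return [(b - a) / seconds for a, b in zip(before, after)]

# ~40,000 extra CAL IPIs per CPU over ~5 minutes, as measured above:
rates = cal_rates([0] * 8, [40000] * 8, 300)
print("%.0f IPIs/s per CPU" % rates[0])   # ~133, i.e. "roughly 130"
```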