On Mon, 2010-03-29 at 11:10 -0400, Greg Freemyer wrote:
> On Mon, Mar 29, 2010 at 2:25 AM, Keith Mannthey <kmannth@xxxxxxxxxx> wrote:
> >
> > After 2.6.30 I am seeing large performance regressions on a raid setup.
> > I am working to publish a larger amount of data but I wanted to get some
> > quick data out about what I am seeing.
>
> Is mdraid involved?
>
> They added barrier support for some configs after 2.6.30 I believe.
> It can cause a drastic perf change, but it increases reliability and
> is "correct".

lvm and device mapper are involved.  The git bisect just took me to:

374bf7e7f6cc38b0483351a2029a97910eadde1b is first bad commit
commit 374bf7e7f6cc38b0483351a2029a97910eadde1b
Author: Mikulas Patocka <mpatocka@xxxxxxxxxx>
Date:   Mon Jun 22 10:12:22 2009 +0100

    dm: stripe support flush

    Flush support for the stripe target.

    This sets ti->num_flush_requests to the number of stripes and remaps
    individual flush requests to the appropriate stripe devices.

    Signed-off-by: Mikulas Patocka <mpatocka@xxxxxxxxxx>
    Signed-off-by: Alasdair G Kergon <agk@xxxxxxxxxx>

:040000 040000 542f4b9b442d1371c6534f333b7e00714ef98609 d490479b660139fc1b6b0ecd17bb58c9e00e597e M	drivers

This may be correct behavior, but the performance penalty in this test
case is pretty high.  I am going to move back to current kernels and
start looking into ext4/dm flushing.

Thanks,
  Keith Mannthey
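
P.S. For anyone wondering what this commit actually does: the remapping
described in the commit message boils down to something like the sketch
below.  This is a simplified reconstruction against the 2.6.31-era
device-mapper target API, not the verbatim patch; consult the commit
itself for the exact code.

    /* In the stripe constructor, the target advertises that it wants
     * one flush request per stripe: */
    ti->num_flush_requests = stripes;

    /* The dm core then calls the map function once per stripe for each
     * empty barrier (flush) bio, and the target redirects each copy to
     * the corresponding underlying stripe device: */
    static int stripe_map(struct dm_target *ti, struct bio *bio,
                          union map_info *map_context)
    {
            struct stripe_c *sc = ti->private;

            if (unlikely(bio_empty_barrier(bio))) {
                    /* flush_request selects which stripe this copy of
                     * the flush is destined for */
                    BUG_ON(map_context->flush_request >= sc->stripes);
                    bio->bi_bdev =
                       sc->stripe[map_context->flush_request].dev->bdev;
                    return DM_MAPIO_REMAPPED;
            }

            /* ... normal striped I/O mapping continues below ... */
    }

If that reading is right, every barrier issued by the filesystem fans
out into one flush per underlying device, so an N-stripe volume pays
for N cache flushes where before 2.6.30 it paid for none.  That would
explain why the penalty scales with the test setup here.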