On Sat, Oct 08, 2011 at 12:00:36PM +0800, Wu Fengguang wrote:
> Hi Jan,
>
> The test results don't look good: btrfs is heavily impacted and the
> other filesystems are slightly impacted.
>
> I'll send you the detailed logs in private emails (too large for the
> mailing list). Basically I noticed many writeback_wait traces that
> never appear w/o this patch. In the btrfs cases that see larger
> regressions, I see large fluctuations in the writeout bandwidth and
> long disk idle periods. It's still a bit puzzling how all of this
> happens...

Sorry, I've found that part of the regressions (about 2-3%) was caused
by a recent change to my test scripts. Here are fairer comparisons;
they show regressions only in btrfs and xfs:

3.1.0-rc8-ioless6a+  3.1.0-rc8-ioless6-requeue+
------------------------  ------------------------
    37.34        +0.8%        37.65  thresh=100M/ext3-10dd-4k-8p-4096M-100M:10-X
    44.44        +3.4%        45.96  thresh=100M/ext3-1dd-4k-8p-4096M-100M:10-X
    41.70        +1.0%        42.14  thresh=100M/ext3-2dd-4k-8p-4096M-100M:10-X
    46.45        -0.3%        46.32  thresh=100M/ext4-10dd-4k-8p-4096M-100M:10-X
    56.60        -0.3%        56.41  thresh=100M/ext4-1dd-4k-8p-4096M-100M:10-X
    54.14        +0.9%        54.63  thresh=100M/ext4-2dd-4k-8p-4096M-100M:10-X
    30.66        -0.7%        30.44  thresh=1G/ext3-100dd-4k-8p-4096M-1024M:10-X
    35.24        +1.6%        35.82  thresh=1G/ext3-10dd-4k-8p-4096M-1024M:10-X
    43.58        +0.5%        43.80  thresh=1G/ext3-1dd-4k-8p-4096M-1024M:10-X
    50.42        -0.6%        50.14  thresh=1G/ext4-100dd-4k-8p-4096M-1024M:10-X
    56.23        -1.0%        55.64  thresh=1G/ext4-10dd-4k-8p-4096M-1024M:10-X
    58.12        -0.5%        57.84  thresh=1G/ext4-1dd-4k-8p-4096M-1024M:10-X
    45.37        +1.4%        46.03  thresh=8M/ext3-1dd-4k-8p-4096M-8M:10-X
    43.71        +2.2%        44.69  thresh=8M/ext3-2dd-4k-8p-4096M-8M:10-X
    35.58        +0.5%        35.77  thresh=8M/ext4-10dd-4k-8p-4096M-8M:10-X
    56.39        +1.4%        57.16  thresh=8M/ext4-1dd-4k-8p-4096M-8M:10-X
    51.26        +1.5%        52.04  thresh=8M/ext4-2dd-4k-8p-4096M-8M:10-X
   787.25        +0.7%       792.47  TOTAL

3.1.0-rc8-ioless6a+  3.1.0-rc8-ioless6-requeue+
------------------------  ------------------------
    44.53       -18.6%        36.23  thresh=100M/xfs-10dd-4k-8p-4096M-100M:10-X
    55.89        -0.4%        55.64  thresh=100M/xfs-1dd-4k-8p-4096M-100M:10-X
    51.11        +0.5%        51.35  thresh=100M/xfs-2dd-4k-8p-4096M-100M:10-X
    41.76        -4.8%        39.77  thresh=1G/xfs-100dd-4k-8p-4096M-1024M:10-X
    48.34        -0.3%        48.18  thresh=1G/xfs-10dd-4k-8p-4096M-1024M:10-X
    52.36        -0.2%        52.26  thresh=1G/xfs-1dd-4k-8p-4096M-1024M:10-X
    31.07        -1.1%        30.74  thresh=8M/xfs-10dd-4k-8p-4096M-8M:10-X
    55.44        -0.6%        55.09  thresh=8M/xfs-1dd-4k-8p-4096M-8M:10-X
    47.59       -31.2%        32.74  thresh=8M/xfs-2dd-4k-8p-4096M-8M:10-X
   428.07        -6.1%       401.99  TOTAL

3.1.0-rc8-ioless6a+  3.1.0-rc8-ioless6-requeue+
------------------------  ------------------------
    58.23       -82.6%        10.13  thresh=100M/btrfs-10dd-4k-8p-4096M-100M:10-X
    58.43       -80.3%        11.54  thresh=100M/btrfs-1dd-4k-8p-4096M-100M:10-X
    58.53       -79.9%        11.76  thresh=100M/btrfs-2dd-4k-8p-4096M-100M:10-X
    56.55       -31.7%        38.63  thresh=1G/btrfs-100dd-4k-8p-4096M-1024M:10-X
    56.11       -30.1%        39.25  thresh=1G/btrfs-10dd-4k-8p-4096M-1024M:10-X
    56.21       -18.3%        45.93  thresh=1G/btrfs-1dd-4k-8p-4096M-1024M:10-X
   344.06       -54.3%       157.24  TOTAL

I'm now bisecting the patches to find the root cause.

Thanks,
Fengguang
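
P.S. For anyone who wants to watch for the same writeback_wait events:
below is a minimal sketch of how such traces can be collected with
ftrace. It assumes a kernel built with tracepoints enabled and debugfs
mounted at /sys/kernel/debug (the event name comes from
include/trace/events/writeback.h); the log file name is just an example.

  # enable only the writeback:writeback_wait event
  echo 1 > /sys/kernel/debug/tracing/events/writeback/writeback_wait/enable

  # stream the trace to a log file while the dd workloads run
  cat /sys/kernel/debug/tracing/trace_pipe > writeback_wait.log &

  # ... run the tests, then turn the event back off
  echo 0 > /sys/kernel/debug/tracing/events/writeback/writeback_wait/enable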