On Wed 09-11-11 00:52:07, Jan Kara wrote:
> > wfg@bee /export/writeback% ./compare.rb -v jsize -e io_wkB_s thresh*/*-ioless-full-next-20111102+ thresh*/*-20111102+
> >      3.1.0-ioless-full-next-20111102+  3.1.0-ioless-full-bg-all-next-20111102+
> >      ------------------------          ------------------------
> >                 36231.89        -3.8%     34855.10  thresh=1000M/ext3-100dd-4k-8p-4096M-1000M:10-X
> >                 41115.07       -12.7%     35886.36  thresh=1000M/ext3-10dd-4k-8p-4096M-1000M:10-X
> >                 48025.75       -14.3%     41146.57  thresh=1000M/ext3-1dd-4k-8p-4096M-1000M:10-X
> >                 47684.35        -6.4%     44644.30  thresh=1000M/ext4-100dd-4k-8p-4096M-1000M:10-X
> >                 54015.86        -4.0%     51851.01  thresh=1000M/ext4-10dd-4k-8p-4096M-1000M:10-X
> >                 55320.03        -2.6%     53867.63  thresh=1000M/ext4-1dd-4k-8p-4096M-1000M:10-X
> >                 37400.51        +1.6%     38012.57  thresh=100M/ext3-10dd-4k-8p-4096M-100M:10-X
> >                 45317.31        -4.5%     43272.16  thresh=100M/ext3-1dd-4k-8p-4096M-100M:10-X
> >                 40552.64        +0.8%     40884.60  thresh=100M/ext3-2dd-4k-8p-4096M-100M:10-X
> >                 44271.29        -5.6%     41789.76  thresh=100M/ext4-10dd-4k-8p-4096M-100M:10-X
> >                 54334.22        -3.5%     52435.69  thresh=100M/ext4-1dd-4k-8p-4096M-100M:10-X
> >                 52563.67        -6.1%     49341.84  thresh=100M/ext4-2dd-4k-8p-4096M-100M:10-X
> >                 45027.95        -1.0%     44599.37  thresh=10M/ext3-1dd-4k-8p-4096M-10M:10-X
> >                 42478.40        +0.3%     42608.48  thresh=10M/ext3-2dd-4k-8p-4096M-10M:10-X
> >                 35178.47        -0.2%     35103.56  thresh=10M/ext4-10dd-4k-8p-4096M-10M:10-X
> >                 54079.64        -0.5%     53834.85  thresh=10M/ext4-1dd-4k-8p-4096M-10M:10-X
> >                 49982.11        -0.4%     49803.44  thresh=10M/ext4-2dd-4k-8p-4096M-10M:10-X
> >                783579.17        -3.8%    753937.28  TOTAL io_wkB_s
> Here I can see some noticeable drops in the realistic thresh=100M case
> (the thresh=1000M case is unrealistic, but it still surprises me that
> there are drops there as well). I'll try to reproduce your results so
> that I can look into this more effectively.

So I've run a test on a machine with 1G of memory and thresh=184M (i.e.
something similar to your 4G-1G test). I used tiobench with 10 threads,
each thread writing a 1.6G file.
I ran the test 10 times to get an idea of the fluctuations. The result is:

              without patch            with patch
             AVG     STDDEV           AVG     STDDEV
        199.884820 +- 1.32268    200.466003 +- 0.377405

The numbers are time-to-completion in seconds, so lower is better. In
summary: no statistically meaningful difference. I'll run more tests with
different dirty thresholds to see whether I can observe some difference
there...

								Honza
-- 
Jan Kara <jack@xxxxxxx>
SUSE Labs, CR
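The "no statistically meaningful difference" conclusion can be checked from the reported summary statistics alone. A quick sketch (not part of the original test run) applying Welch's two-sample t-test to the two means, with n = 10 runs per configuration as stated above and the conventional alpha = 0.05 assumed:

```python
import math

# Reported summary statistics: 10 tiobench runs per configuration.
n = 10
mean_without, sd_without = 199.884820, 1.32268    # without patch
mean_with, sd_with = 200.466003, 0.377405         # with patch

# Standard error of the difference between the two means.
se = math.sqrt(sd_without**2 / n + sd_with**2 / n)

# Welch's t statistic for the difference in means.
t = (mean_with - mean_without) / se

# Two-tailed critical value for alpha = 0.05 at 10 degrees of freedom
# (the Welch approximation gives about 10.5 df for these samples).
t_crit = 2.228

print(f"t = {t:.3f}, critical = {t_crit}")
print("significant" if abs(t) > t_crit else "not significant")
```

The statistic comes out around t = 1.34, well below the critical value, so the 0.6 s difference is within run-to-run noise, matching the conclusion above.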