On Wed, Jul 02, 2014 at 08:38:17AM +1000, Dave Chinner wrote:
> On Tue, Jul 01, 2014 at 07:39:15PM +0100, Mel Gorman wrote:
> > On Tue, Jul 01, 2014 at 01:16:11PM -0400, Johannes Weiner wrote:
> > > On Mon, Jun 30, 2014 at 05:47:59PM +0100, Mel Gorman wrote:
> > > Seqread throughput is up, randread takes a small hit. But allocation
> > > latency is badly screwed at higher concurrency levels:
> >
> > So the results are roughly similar. You don't state which filesystem
> > it is, but FWIW, if it's an ext3 filesystem using the ext4 driver,
> > then throughput at higher concurrency levels is also affected by
> > filesystem fragmentation. That problem was outside the scope of the
> > series.
>
> I'd suggest you're both going wrong at the "using ext3" point.
>
> Use ext4 or XFS for your performance measurements, because that's
> what everyone is using for their systems these days. Not to mention
> they don't have all the crappy allocation artifacts that ext3 has,
> nor the throughput limitations caused by the ext3 journal, and so
> on.
>
> Fundamentally, ext3 performance is simply not a relevant performance
> metric anymore - it's a legacy filesystem in maintenance mode and
> has been for a few years now...

The problem crosses filesystems. ext3 is simply the first in the queue
because, by and large, it behaved the worst. Covering the rest of them
simply takes more time, and the results differ, as you might expect.
Here are the xfs results for the smaller of the machines, as it was
able to get that far before it got reset:

                            3.16.0-rc2             3.0.0         3.16.0-rc2
                               vanilla           vanilla        fairzone-v4
Min SeqRead-MB/sec-1    92.69 (  0.00%)    99.68 (  7.54%)   104.47 ( 12.71%)
Min SeqRead-MB/sec-2   106.81 (  0.00%)   123.43 ( 15.56%)   123.24 ( 15.38%)
Min SeqRead-MB/sec-4   101.89 (  0.00%)   113.78 ( 11.67%)   116.85 ( 14.68%)
Min SeqRead-MB/sec-8    95.31 (  0.00%)    91.40 ( -4.10%)   101.68 (  6.68%)
Min SeqRead-MB/sec-16   81.84 (  0.00%)    88.53 (  8.17%)    86.63 (  5.85%)

-- 
Mel Gorman
SUSE Labs
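For anyone reading the table, the bracketed figures are percentage
deltas relative to the 3.16.0-rc2 vanilla baseline column (hence its
0.00% entries). A minimal sketch of that arithmetic in C -- the
pct_gain() helper below is illustrative only, not from any tool
mentioned in this thread:

    #include <stdio.h>

    /* Relative change versus the baseline column, in percent. */
    static double pct_gain(double baseline, double value)
    {
            return (value - baseline) / baseline * 100.0;
    }

    int main(void)
    {
            /* Min SeqRead-MB/sec-1 row from the table above */
            printf("3.0.0 vanilla:          %.2f%%\n",
                   pct_gain(92.69, 99.68));   /* prints 7.54%  */
            printf("3.16.0-rc2 fairzone-v4: %.2f%%\n",
                   pct_gain(92.69, 104.47));  /* prints 12.71% */
            return 0;
    }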