Re: More testing: 4x parallel 2G writes, sequential reads

On Nov 07, 2007  16:42 -0600, Eric Sandeen wrote:
> I tried ext4 vs. xfs doing 4 parallel 2G IO writes in 1M units to 4
> different subdirectories of the root of the filesystem:
> 
> http://people.redhat.com/esandeen/seekwatcher/ext4_4_threads.png
> http://people.redhat.com/esandeen/seekwatcher/xfs_4_threads.png
> http://people.redhat.com/esandeen/seekwatcher/ext4_xfs_4_threads.png
> 
> and then read them back sequentially:
> 
> http://people.redhat.com/esandeen/seekwatcher/ext4_4_threads_read.png
> http://people.redhat.com/esandeen/seekwatcher/xfs_4_threads_read.png
> http://people.redhat.com/esandeen/seekwatcher/ext4_xfs_4_read_threads.png
> 
> At the end of the write, ext4 had on the order of 400 extents/file, xfs
> had on the order of 30 extents/file.  It's clear especially from the
> read graph that ext4 is interleaving the 4 files, in about 5M chunks on
> average.  Throughput seems comparable between ext4 & xfs nonetheless.
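For readers who want to reproduce this kind of run, here is a scaled-down sketch: four parallel dd writers, each streaming one file in 1M units into its own subdirectory, followed by filefrag to count extents per file. The mount point and file sizes are illustrative, not Eric's exact commands (the original test wrote 2G per file; 32M here just keeps the example quick):

```shell
#!/bin/bash
# Scaled-down sketch of the 4-thread parallel write workload.
# MNT is a hypothetical mount point for the filesystem under test;
# bump COUNT to 2048 to match the original 2G-per-file run.
MNT=${MNT:-/tmp/fstest}
COUNT=${COUNT:-32}            # number of 1M blocks per file

mkdir -p "$MNT"
for i in 1 2 3 4; do
    mkdir -p "$MNT/dir$i"
    dd if=/dev/zero of="$MNT/dir$i/file" bs=1M count="$COUNT" 2>/dev/null &
done
wait                          # let all four writers finish

# filefrag reports how many extents each file ended up with
for i in 1 2 3 4; do
    filefrag "$MNT/dir$i/file"
done
```

On a real test this should be run against a freshly made filesystem so allocator state from earlier runs doesn't skew the extent counts.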

The question is what the "best" result would be for this kind of workload.
In HPC applications the common case is that the data files are also read
back in parallel rather than serially.

The test shows ext4 finishing marginally faster in the write case, and
marginally slower in the read case.  What happens if you have 4 parallel
readers?
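A minimal sketch of that parallel-read variant, again with illustrative paths and scaled-down sizes (the original test used 2G files): the setup loop just ensures the files exist, then four dd readers run concurrently instead of one after another.

```shell
#!/bin/bash
# Sketch of the suggested variant: 4 sequential readers running
# concurrently.  MNT and COUNT are illustrative, not from the original
# test; use COUNT=2048 for 2G files on a real run.
MNT=${MNT:-/tmp/fstest}
COUNT=${COUNT:-32}                       # 1M blocks per file

for i in 1 2 3 4; do                     # setup: create the test files
    mkdir -p "$MNT/dir$i"
    dd if=/dev/zero of="$MNT/dir$i/file" bs=1M count="$COUNT" 2>/dev/null
done
sync
echo 3 > /proc/sys/vm/drop_caches 2>/dev/null  # drop page cache (needs root)

time {                                   # 4 concurrent sequential readers
    for i in 1 2 3 4; do
        dd if="$MNT/dir$i/file" of=/dev/null bs=1M 2>/dev/null &
    done
    wait
}
```

Without dropping the page cache the reads are served from memory and the on-disk layout makes no difference, so the drop_caches step (or a reboot) matters for a meaningful comparison.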

Cheers, Andreas
--
Andreas Dilger
Sr. Software Engineer, Lustre Group
Sun Microsystems of Canada, Inc.

