Re: File System Performance results

Theodore Tso wrote:
> On Mon, Oct 27, 2008 at 09:28:55AM -0500, Steven Pratt wrote:
>>> thanks for posting the numbers.  They are definitely interesting.  On
>>> the surface, ext4 is doing quite well overall (yay!),
>> Yes, that was good news. Along these lines, if there is anything else we can do to help out ext4, just let us know.
>
> Indeed, thanks Steven, for doing this work; it's much appreciated!
>
> A couple of quick questions/suggestions:

> *) What version of e2fsprogs did you use for your ext4 tests,

[root@btrfs1 ~]# mkfs.ext4dev -V
mke2fs 1.41.2 (02-Oct-2008)
       Using EXT2FS Library version 1.41.2
[root@btrfs1 ~]#

> and what, if any, options did you give to mke2fs when creating the filesystem?

Basically none:

'mkfs.ext4dev -F /dev/ffsbdev1'

> *) For all of the filesystems except for btrfs, where you mentioned
>  different mount options in use, is it fair to assume that no mount
>  options were specified so the filesystem defaults were used?

Right, except where noted on btrfs, no mount options were used on any filesystem.

> *) With the full knowledge that ext4 will probably not do terribly
>  well with this suggested change, it would be interesting if you did a
>  variant of the mail server benchmark where the benchmark code
>  performed an fsync() before closing the new file, and an fsync() on the
>  containing directory after deleting a file, to simulate what real
>  MTAs tend to do in order to assure correct SMTP semantics.  (Some
>  won't do the fsync on delete, on the assumption that sending an
>  e-mail twice is harmless, whereas losing an e-mail is completely
>  unacceptable.  In a world where 97% of the mail messages transiting
>  the backbone are spam, some might disagree with those sentiments, but
>  the SMTP protocol requires, and most MTAs do perform, an fsync() on an
>  incoming mail message before they acknowledge that the mail message
>  has been accepted and they are now taking responsibility for sending
>  the mail message on to its final destination. :-)

Yes, Chris Mason had already requested this.  Should have it this week.
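
For reference, the suggested change amounts to something like the sketch
below (plain POSIX C; the spool directory, file name, and message text are
made up for illustration and are not taken from the actual benchmark code):

/*
 * Minimal sketch of the MTA-style pattern described above: fsync() a
 * newly written message file before close(), and fsync() the containing
 * directory after unlinking a message.  Paths and contents are
 * hypothetical; this is not the benchmark code.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void die(const char *what) { perror(what); exit(1); }

int main(void)
{
    const char *dir  = "/mnt/test/spool";       /* hypothetical spool dir */
    const char *path = "/mnt/test/spool/msg.1"; /* hypothetical message   */
    const char body[] = "From: a@example.com\n\ntest message\n";

    /* "Deliver" a message: write it and fsync() before close(), so the
     * data is on stable storage before the MTA acknowledges the mail.  */
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
        die("open message");
    if (write(fd, body, sizeof(body) - 1) != (ssize_t)(sizeof(body) - 1))
        die("write message");
    if (fsync(fd) < 0)
        die("fsync message");
    if (close(fd) < 0)
        die("close message");

    /* "Expunge" the message: unlink it, then fsync() the containing
     * directory so the removal of the name is durable as well.        */
    if (unlink(path) < 0)
        die("unlink message");
    int dfd = open(dir, O_RDONLY | O_DIRECTORY);
    if (dfd < 0)
        die("open directory");
    if (fsync(dfd) < 0)
        die("fsync directory");
    close(dfd);
    return 0;
}

The directory fsync() is what makes the unlink durable; an fsync() on the
file descriptor alone does not cover the directory entry.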

> In 2-3 days, after I get the ext4 patch queue back into shape after
> 2.6.28-rc2, it would be interesting to get a benchmark run of 2.6.28-rc2
> plus the ext4 patch queue with and without the akpm_lock_hack mount
> option.  I'm not sure how much time/effort it takes to do a complete
> set of benchmark runs for a different kernel version and/or mount
> option, but there are definitely a number of experiments where it
> would be very useful to crank through your benchmark systems, and as
> long as it doesn't burden your primary benchmarking objectives overly
> much, it would be great to see what those experiments run on your
> benchmarking setup would turn up.

It is not much effort at all. Runs take about 3 hours per filesystem (for the currently defined set of benchmarks), so we can complete a full set in about 15 hours. It is all automated, so assuming no breakage, it takes about 30 minutes to set up a new set of runs and another 30 minutes after they all complete to verify and graph it all.
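
For what it's worth, the with/without comparison above just means mounting
the test filesystem once with the extra option and once without, around the
same benchmark pass.  A rough sketch using mount(2) follows; the mount point
and the "ext4dev" type string are my assumptions (only /dev/ffsbdev1 and the
option name appear earlier in this thread), and in practice this would be a
mount(8) call in the automation rather than a C program:

/*
 * Sketch only: mount the benchmark device with and without the
 * akpm_lock_hack option mentioned above.  Mount point and filesystem
 * type string are assumptions, not taken from the benchmark scripts.
 */
#include <stdio.h>
#include <sys/mount.h>

static int mount_test_fs(const char *opts)
{
    /* The last mount(2) argument is the filesystem-specific option
     * string, e.g. "" for the defaults or "akpm_lock_hack".          */
    return mount("/dev/ffsbdev1", "/mnt/test", "ext4dev", 0, opts);
}

int main(void)
{
    if (mount_test_fs("") < 0)                 /* baseline run */
        perror("mount (defaults)");
    /* ... run the benchmark set, unmount ... */

    if (mount_test_fs("akpm_lock_hack") < 0)   /* comparison run */
        perror("mount (akpm_lock_hack)");
    /* ... run the benchmark set again ... */
    return 0;
}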

If there are specific runs you would like to see, just let me know the specifics.

Steve
> Thanks, regards,
>
> 						- Ted

