Re: cpio to ext4 seems much slower than to ext2, ext3 or xfs

Richard W.M. Jones wrote:
On Thu, Nov 12, 2009 at 09:54:12AM +0000, Daniel P. Berrange wrote:
On Wed, Nov 11, 2009 at 09:05:20PM +0000, Richard W.M. Jones wrote:
On Wed, Nov 11, 2009 at 01:24:20PM -0600, Eric Sandeen wrote:
Anybody got actual numbers? I don't disagree that mkfs.ext4 is slow in the default config, but I don't think it should be slower than mkfs.ext3 for the same sized disks.
Easy with guestfish:

  $ guestfish --version
  guestfish 1.0.78
  $ for fs in ext2 ext3 ext4 xfs jfs ; do guestfish sparse /tmp/test.img 10G : run : echo $fs : sfdiskM /dev/sda , : time mkfs $fs /dev/sda1 ; done
  ext2
  elapsed time: 5.21 seconds
  ext3
  elapsed time: 7.87 seconds
  ext4
  elapsed time: 6.10 seconds
  xfs
  elapsed time: 0.45 seconds
  jfs
  elapsed time: 0.78 seconds

Note that because this is using a sparsely allocated disk, each write
to the virtual disk is very slow.  Change 'sparse' to 'alloc' to test
this with a non-sparse file-backed disk.
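
For example, the fully-allocated variant of the same loop (only 'sparse'
swapped for 'alloc', everything else exactly as above) would be something
like:

  $ for fs in ext2 ext3 ext4 xfs jfs ; do guestfish alloc /tmp/test.img 10G : run : echo $fs : sfdiskM /dev/sda , : time mkfs $fs /dev/sda1 ; done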
You really want to avoid using sparse files at all when doing any kind of
benchmark / performance testing in VMs.  The combination of a sparse file
stored on a journalling filesystem in the host, plus virt, can cause
pathologically bad I/O performance until the file has all its extents fully
allocated on the host FS.  So the use of a sparse file may well be
exaggerating the real difference in elapsed time between these different
mkfs calls in the guest.
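
One rough workaround (a sketch of mine, not from the thread) is to
preallocate the backing file on the host first and then hand it to guestfish
with 'add' rather than 'sparse':

  $ fallocate -l 10G /tmp/test.img
  ... or, if the host FS has no fallocate support, write it out fully:
  $ dd if=/dev/zero of=/tmp/test.img bs=1M count=10240

Note that fallocate only reserves (unwritten) extents, so a journalling host
FS may still do some extent conversion on first write; dd writes every block
up front and avoids that, at the cost of 10G of I/O.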

Again, this time backed by a 10 GB logical volume in the host, so this
should remove pretty much all host effects:

$ for fs in ext2 ext3 ext4 xfs jfs reiserfs nilfs2 ntfs msdos btrfs hfs hfsplus gfs gfs2 ; do guestfish add /dev/mapper/vg_trick-Temp : run : zero /dev/sda : echo $fs : sfdiskM /dev/sda , : time mkfs $fs /dev/sda1 ; done
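
(For reference, /dev/mapper/vg_trick-Temp is a host logical volume; it would
have been created with something along the lines of

  # lvcreate -L 10G -n Temp vg_trick

where the VG/LV names are just read off the device path above.)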


ext2
elapsed time: 3.48 seconds
ext3
elapsed time: 5.45 seconds
ext4
elapsed time: 5.19 seconds

so here we have ext4 slightly faster, which was the original question... ;)

(dropping caches in between might be best, too... see the note after the results below)

xfs
elapsed time: 0.35 seconds
jfs
elapsed time: 0.66 seconds
reiserfs
elapsed time: 0.73 seconds
nilfs2
elapsed time: 0.19 seconds
ntfs
elapsed time: 2.33 seconds
msdos
elapsed time: 0.29 seconds
btrfs
elapsed time: 0.16 seconds
hfs
elapsed time: 0.44 seconds
hfsplus
elapsed time: 0.46 seconds
gfs
elapsed time: 1.60 seconds
gfs2
elapsed time: 3.98 seconds
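
On the cache-dropping aside above: between runs that is just the usual
host-side

  # sync; echo 3 > /proc/sys/vm/drop_caches

(run as root; the value 3 drops the page cache plus dentries and inodes).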

I'd like to repeat my proviso: I think this test is meaningless for
most users.

Until users have 8TB RAIDs at home, which is not really that far off ...

-Eric

Rich.


