a comparison of ext3, jfs, and xfs on hardware raid

I'm setting up a new file server and I just can't seem to get the
expected performance from ext3.  Unfortunately I'm stuck with ext3 due
to my use of Lustre.  So I'm hoping you dear readers will send me some
tips for increasing ext3 performance.

The system is using an Areca hardware raid controller with 5 7200RPM
SATA disks.  The RAID controller has 128MB of cache and the disks each
have 8MB.  The cache is write-back.  The system is Linux 2.6.12 on amd64
with 1GB system memory.

Using bonnie++ with a 10GB fileset (figures in MB/s):

         ext3    jfs    xfs
Read     112     188    141
Write     97     157    167
Rewrite   51      71     60

These numbers were obtained using the mkfs defaults for all filesystems
and the deadline I/O scheduler.  As you can see, JFS is kicking butt on
this test.

Next I used pgbench to test parallel random I/O.  pgbench has a
configurable number of clients and transactions per client, and the
size of its database can be varied.  I used a database of 100 million
tuples (scale factor 1000) and timed 100,000 transactions on each
filesystem, with 10 and 100 clients per run.  Figures are in
transactions per second.


              ext3  jfs  xfs
10 Clients      55   81   68
100 Clients     61  100   64

Here XFS is not substantially faster than ext3, but JFS continues to lead.
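
For anyone wanting to reproduce this, the invocations are along these
lines (the database name is just a placeholder, and -t is transactions
per client, so the counts are split to total 100,000):

    # build the test database at scale factor 1000 (~100 million tuples)
    pgbench -i -s 1000 bench

    # 10 clients x 10,000 transactions each
    pgbench -c 10 -t 10000 bench

    # 100 clients x 1,000 transactions each
    pgbench -c 100 -t 1000 bench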

Overall, JFS is roughly 50-65% faster than ext3 on pgbench and 40-70%
faster on bonnie++ linear I/O.

Are there any tunables that I might want to adjust to get better
performance from ext3?
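
The kind of knobs I have in mind are journaling mode, atime updates,
and mke2fs stride alignment -- something like the following, where the
stride value is only a guess for this array:

    # metadata-only journaling and no atime updates
    mount -o noatime,data=writeback /dev/sda1 /mnt/test

    # or at mkfs time, align to the RAID layout
    # (stride = RAID chunk size in filesystem blocks; 16 is illustrative)
    mkfs.ext3 -E stride=16 /dev/sda1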

-jwb

_______________________________________________

Ext3-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ext3-users
