ext3 on Linux software RAID1

Everyone,

We just had a pretty bad crash on one of our production boxes, and the ext2
filesystem on the data partition had some major corruption.  Needless to
say, I am now looking into converting the filesystem to ext3, and I have
some questions regarding ext3 and Linux software RAID.

I have read that there were previously some issues running ext3 on a
software RAID device (/dev/mdN), but that most of those issues are resolved
by running a 2.4.x kernel.  We are currently running 2.4.16 on our
production system, and we have a rather complicated hardware/software RAID
configuration on the box.

Now for the details of my system.  See
http://w3.one.net/~djflux/graphics/raiddiag.png for a diagram of our RAID
configuration.  We have two Dell PowerVault 220S enclosures filled with 15K
18GB SCSI drives.  Each drive in PowerVault 1 is hardware-mirrored to the
corresponding drive in PowerVault 2.  I then use Linux software RAID0 to
create a stripe across these 12 drives (/dev/md0).  This setup is somewhat
convoluted due to hardware constraints (a Dell PERC3QC RAID card can only
span [RAID10] 8 drives and we wanted 12).

Internal to the box I have seven 10K 36GB SCSI disks in a hardware stripe
(RAID0, /dev/sdb1).  I then use Linux software RAID1 to mirror this device
with the software RAID0, creating /dev/md1.  I know I'm only using a portion
of the full space on /dev/sdb1, but we hope to use all of it at a later
date.

There is an ext2 filesystem on /dev/md1 that is used for the Informix/IBM
database UniVerse.  The reason for this RAID configuration is to have a
static copy of the data to use for backups.  I suspend database operations
long enough to use mdctl to fail and remove /dev/sdb1 from /dev/md1.  I can
then back up the static database data knowing that it is a valid
point-in-time snapshot of my database.  Tar is used to archive this drive to
tape, and after the archive completes it is hot-added back into /dev/md1 for
resync.
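For reference, the daily cycle looks roughly like this.  This is a sketch
using the raidtools command names; the mdctl equivalents may differ, and the
mount point (/mnt/snapshot) and tape device (/dev/st0) here are placeholders
for whatever your site actually uses:

```shell
# Daily split of the mirror for backup (raidtools syntax; mdctl's
# invocations may differ).  /dev/md1 is the RAID1, /dev/sdb1 is the
# hardware-striped half we detach.

# 1. Quiesce the database first (UniVerse-specific, not shown here).

# 2. Mark the internal stripe faulty and pull it from the mirror.
raidsetfaulty /dev/md1 /dev/sdb1
raidhotremove /dev/md1 /dev/sdb1

# 3. Mount the detached half read-only and archive it to tape.
#    (The filesystem starts at the front of the member, so the
#    member device mounts directly.)
mount -o ro /dev/sdb1 /mnt/snapshot
tar cf /dev/st0 -C /mnt/snapshot .
umount /mnt/snapshot

# 4. Re-add the member; the kernel resyncs it in the background.
raidhotadd /dev/md1 /dev/sdb1
```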

The box is a Dell PowerEdge 6400 with four 700MHz Xeons and 8GB of RAM.  The
box hosts approximately 450 users during the average business day.  The
database is currently about 70GB and the partition on /dev/md1 is about
200GB.  The database has a few large files that the majority of system users
access very frequently, mostly for reads, but also for updates.  We want the
highest level of integrity for our data, but do not want to impact the
interactivity of the machine very much.  Current system load averages range
from 0.33 to 3.50, occasionally spiking higher.

Now that you know the basics of my system and our ideal requirements, I have
a few questions:  

- Is it wise to convert the filesystem on /dev/md1 to ext3? 

- Have the issues with ext3 on Linux RAID been resolved?

- Will failing and resyncing /dev/md1 on a daily basis cause problems with
the journalling?

- Do you think the filesystem would be stable enough for 18x7 availability?

- What kind of overhead is involved once the filesystem is ext3?

- What journalling mode is suggested for this type of application/system
configuration?

- What size journal would be appropriate given data=ordered vs. data=journal?

- And any other suggestions/insights/comments.
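For context on the last two questions, my understanding is that the
conversion itself is just adding a journal with tune2fs, and the journal
size and data mode are the knobs involved.  A sketch, assuming a reasonably
recent e2fsprogs; the 128MB journal size and the /data mount point are
illustrative, not recommendations:

```shell
# Add a journal to the existing ext2 filesystem on /dev/md1.
# -J size= is in megabytes; journal size mainly matters for data=journal,
# where all file data passes through the journal as well as metadata.
umount /dev/md1
tune2fs -j -J size=128 /dev/md1

# Then change the fstab entry from ext2 to ext3 and pick a data mode,
# e.g. the default ordered mode:
mount -t ext3 -o data=ordered /dev/md1 /data
```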

Below is our /etc/raidtab.  Let me know if you need any more information.
Thank you in advance for all your assistance.

Regards,
Andrew Rechenberg
Network Team, Sherman Financial Group
arechenberg@shermanfinancialgroup.com


raiddev /dev/md0
        raid-level                      0
        persistent-superblock           1
        chunk-size                      64

        nr-raid-disks                   12
        nr-spare-disks                  0

        device                          /dev/sdc1
        raid-disk                       0

        device                          /dev/sdd1
        raid-disk                       1

        device                          /dev/sde1
        raid-disk                       2

        device                          /dev/sdf1
        raid-disk                       3

        device                          /dev/sdg1
        raid-disk                       4

        device                          /dev/sdh1
        raid-disk                       5

        device                          /dev/sdi1
        raid-disk                       6

        device                          /dev/sdj1
        raid-disk                       7

        device                          /dev/sdk1
        raid-disk                       8

        device                          /dev/sdl1
        raid-disk                       9

        device                          /dev/sdm1
        raid-disk                       10

        device                          /dev/sdn1
        raid-disk                       11

raiddev /dev/md1
        raid-level                      1
        persistent-superblock           1
        chunk-size                      64

        nr-raid-disks                   2
        nr-spare-disks                  0

        device                          /dev/md0
        raid-disk                       0

        device                          /dev/sdb1
        raid-disk                       1




