Transition to CentOS - RAID HELP!

     Hi Folks,

     I've inherited an old RH7 system that I'd like to upgrade to 
CentOS 6.1 by means of wiping it clean and doing a fresh install.  
However, the system has a software RAID setup that I wish to keep 
untouched, as it has data on it that I must keep.  Or at the very 
least, TRY to keep.  If all else fails, then so be it and I'll just 
recreate the thing.  I do plan on backing up the data first in case of 
disasters, but I'm hoping it won't come to that, considering there's 
some 500 GiB on it.

     The previous owner sent me a breakdown of how they built the RAID 
when it was first set up.  I've included an explanation below this 
message with the various command outputs.  Apparently their reason for 
doing it the way they did was so they could easily add drives to the 
RAID and grow everything equally.  It just seems a bit convoluted to me.
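
     If I follow that rationale, growing the whole thing by one drive 
would presumably look something like the sketch below.  This is my own 
guess, not the previous owner's procedure, and the new drive name (sdf) 
is made up:

# Partition the new drive the same way as the existing four, then for
# each of the ten md arrays, add the matching partition and grow it:
mdadm --add /dev/md0 /dev/sdf1
mdadm --grow /dev/md0 --raid-devices=5
# Once the reshape finishes, tell LVM the physical volume got bigger:
pvresize /dev/md0
# ...and finally grow a logical volume and its filesystem
# (resize2fs assumes an ext2/ext3 filesystem, which I haven't confirmed):
lvextend -l +100%FREE /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00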

     Here's my problem: I have no idea what the necessary steps would 
be to recreate it, or in what order.  I presume it's pretty much the 
way they explained it to me (a rough command sketch follows the list):
     - create partitions
     - use mdadm to create the various md volumes
     - use pvcreate to create the various physical volumes
     - use vgcreate to create the volume group (they didn't mention 
       this step, but pvscan below shows a VolGroup00, so I assume it's 
       needed)
     - use lvcreate to create the two logical volumes
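
     Translating that list into commands, my best guess (using the 
device names, volume group name, and sizes from the outputs below, plus 
the 256K chunk size mdadm reports) would be something like:

# Guessed reconstruction of the original build, not the owner's actual commands.
# Tie partition 1 of all four drives together into a RAID5 array:
mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=256 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# ...repeat for /dev/md1 through /dev/md9 using partitions 2-3 and 5-11.
# Turn each array into an LVM physical volume:
pvcreate /dev/md0 /dev/md1 /dev/md2    # ...and so on through /dev/md9
# Group them all into the volume group pvscan shows:
vgcreate VolGroup00 /dev/md0 /dev/md1  # ...and so on through /dev/md9
# Carve out the two logical volumes lvscan shows (sizes approximate):
lvcreate -n LogVol00 -L 1.09T VolGroup00
lvcreate -n LogVol01 -L 139.72G VolGroup00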

     If that's the case, great.  However, can I perform a complete 
system wipe, install CentOS 6.1, and then re-attach the RAID and mount 
the logical volumes without much trouble?
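
     In case it helps frame the question, my mental picture of that 
re-attach step after the fresh install is roughly this (untested, and 
the mount point is just a placeholder):

# Assemble the existing arrays from their superblocks:
mdadm --assemble --scan
# Let LVM find and activate the existing volume group on top of them:
vgscan
vgchange -ay VolGroup00
# Mount the logical volumes somewhere (placeholder path):
mkdir -p /mnt/data
mount /dev/VolGroup00/LogVol00 /mnt/data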

     What follows is the current setup, or at least, the way it was 
originally configured.  The system has 5 drives in it:

     sda: main OS drive (80 GB)
     sdb, sdc, sdd, and sde: RAID drives, 500 GB each.

     The setup for the RAID, as it was explained to me, was done 
something like this:

     First, the four drives were each partitioned into 10 equal-size 
partitions (plus the extended partition that holds the logical ones).  
fdisk shows me this:

fdisk -l /dev/sdb

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        6080    48837568+  83  Linux
/dev/sdb2            6081       12160    48837600   83  Linux
/dev/sdb3           12161       18240    48837600   83  Linux
/dev/sdb4           18241       60801   341871232+   5  Extended
/dev/sdb5           18241       24320    48837568+  83  Linux
/dev/sdb6           24321       30400    48837568+  83  Linux
/dev/sdb7           30401       36480    48837568+  83  Linux
/dev/sdb8           36481       42560    48837568+  83  Linux
/dev/sdb9           42561       48640    48837568+  83  Linux
/dev/sdb10          48641       54720    48837568+  83  Linux
/dev/sdb11          54721       60800    48837568+  83  Linux
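
     Before wiping anything, I figure I could capture that layout with 
sfdisk so it can be checked or reproduced later.  This is my own idea, 
not something the previous owner did:

sfdisk -d /dev/sdb > sdb-partition-table.txt    # dump the partition table
# sfdisk /dev/sdX < sdb-partition-table.txt     # replay it onto another drive later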

     Then they took each partition on one drive and linked it with the 
same partition on each of the other drives as a RAID5 array.  So when I 
look at mdadm for each /dev/md[0-9] device, I see this:

mdadm --detail /dev/md0
/dev/md0:
         Version : 00.90.03
   Creation Time : Wed Aug 29 07:01:34 2007
      Raid Level : raid5
      Array Size : 146512128 (139.72 GiB 150.03 GB)
   Used Dev Size : 48837376 (46.57 GiB 50.01 GB)
    Raid Devices : 4
   Total Devices : 4
Preferred Minor : 0
     Persistence : Superblock is persistent

     Update Time : Tue Jan 17 13:49:49 2012
           State : clean
  Active Devices : 4
Working Devices : 4
  Failed Devices : 0
   Spare Devices : 0

          Layout : left-symmetric
      Chunk Size : 256K

            UUID : 43d48349:b58e26df:bb06081a:68db4903
          Events : 0.4

     Number   Major   Minor   RaidDevice State
        0       8       17        0      active sync   /dev/sdb1
        1       8       33        1      active sync   /dev/sdc1
        2       8       49        2      active sync   /dev/sdd1
        3       8       65        3      active sync   /dev/sde1
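
     If I'm reading that right, the array UUIDs are what the new 
install would need in /etc/mdadm.conf.  My assumption is that I could 
regenerate those entries rather than typing them in:

mdadm --examine --scan >> /etc/mdadm.conf
# which should produce one line per array, along the lines of:
# ARRAY /dev/md0 level=raid5 num-devices=4 UUID=43d48349:b58e26df:bb06081a:68db4903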

     ... and pvscan says:

pvscan
   PV /dev/md0   VG VolGroup00   lvm2 [139.72 GB / 0    free]
   PV /dev/md1   VG VolGroup00   lvm2 [139.72 GB / 0    free]
   PV /dev/md2   VG VolGroup00   lvm2 [139.72 GB / 0    free]
   PV /dev/md3   VG VolGroup00   lvm2 [139.72 GB / 0    free]
   PV /dev/md4   VG VolGroup00   lvm2 [139.72 GB / 0    free]
   PV /dev/md5   VG VolGroup00   lvm2 [139.72 GB / 0    free]
   PV /dev/md6   VG VolGroup00   lvm2 [139.72 GB / 0    free]
   PV /dev/md7   VG VolGroup00   lvm2 [139.72 GB / 0    free]
   PV /dev/md8   VG VolGroup00   lvm2 [139.72 GB / 0    free]
   PV /dev/md9   VG VolGroup00   lvm2 [139.72 GB / 139.72 GB free]
   Total: 10 [1.36 TB] / in use: 10 [1.36 TB] / in no VG: 0 [0   ]

     (Evidently /dev/md9 is part of the volume group but all of its 
space is unallocated ... kept free for future growth, maybe?)
     And from there, they created the logical volumes, which lvscan 
reports as:

lvscan
   ACTIVE            '/dev/VolGroup00/LogVol00' [1.09 TB] inherit
   ACTIVE            '/dev/VolGroup00/LogVol01' [139.72 GB] inherit
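
     Assuming the re-attach works, I imagine the last piece is just a 
couple of fstab entries.  The mount points and filesystem type below 
are placeholders, since I don't know what the old box used:

# Hypothetical /etc/fstab lines; paths and filesystem type are guesses:
/dev/VolGroup00/LogVol00   /srv/data    ext3   defaults   1 2
/dev/VolGroup00/LogVol01   /srv/extra   ext3   defaults   1 2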

