Re: Recovering from default FC6 install

On Sun, 2006-11-12 at 01:00 -0500, Bill Davidsen wrote:
> I tried something new on a test system, using the install partitioning 
> tools to partition the disk. I had three drives and went with RAID-1 for 
> boot, and RAID-5+LVM for the rest. After the install was complete I 
> noted that it was solid busy on the drives, and found that the base RAID 
> appears to have been created (a) with no superblock and (b) with no 
> bitmap. That last is an issue, as a test system it WILL be getting hung 
> and rebooted, and recovering the 1.5TB took hours.
> 
> Is there an easy way to recover this? The LVM dropped on it has a lot of 
> partitions, and there is a lot of data in them after several hours of
> feeding with GigE, so I can't readily back up and recreate by hand.
> 
> Suggestions?

First, the Fedora installer *always* creates persistent arrays (arrays with
superblocks), so I'm not sure what is making it look otherwise on your
system, but they should be persistent.

So, assuming that they are persistent, just recreate the arrays in place
with version 1.0 superblocks and an internal bitmap.  I did that exact thing
on the FC6 machine I was testing with (raid1, not raid5, but no biggie
there) and it worked fine.  The detailed list of instructions:

Reboot with a rescue CD and skip the step where it finds the installation.
Then, when you are at a prompt:

1. Use mdadm to examine the raid superblocks so you get all the pertinent
   data right, such as the chunk size of the raid5 and the ordering of its
   constituent drives.
2. Recreate the arrays with version 1.0 superblocks and internal write
   intent bitmaps.
3. Mount the partitions manually and bind mount things like /dev and /proc
   into wherever you mounted the root filesystem.
4. Edit the mdadm.conf on the root filesystem and remove the ARRAY lines
   (the GUIDs will be wrong now), then use mdadm -Db or mdadm -Eb to get
   new ARRAY lines and append them to mdadm.conf (possibly altering the
   device names for the arrays; if you use -E, remember to correct the
   printout of the GUID in the ARRAY line, it's 10:8:8:6 instead of
   8:8:8:8).
5. Patch mkinitrd with something like the attached patch, and patch
   /etc/rc.d/rc.sysinit with something like the other attached patch (or
   leave that patch out and manually add the correct auto= parameter to
   your ARRAY lines in mdadm.conf; there's an example ARRAY line at the
   end of this mail).
6. chroot into the root filesystem and remake your initrd image.
7. fdisk the drives and switch the linux partition types from raid
   autodetect to plain linux.
8. Reboot, and you are done.

Roughly, the whole sequence looks like the sketch below.
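
(Every device name, volume group name, chunk size and kernel version in this
sketch is a placeholder on my part; take the real values from your own
--examine output and your setup.)

# 1. Note the raid level, chunk size and device order from the old superblocks
mdadm --examine /dev/sda1 /dev/sdb1
mdadm --examine /dev/sda2 /dev/sdb2 /dev/sdc2

# 2. Recreate the arrays in place, same layout, but with version 1.0
#    superblocks and internal write intent bitmaps (if the rescue
#    environment already started the arrays, mdadm --stop them first)
mdadm --create /dev/md0 --metadata=1.0 --bitmap=internal \
      --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --metadata=1.0 --bitmap=internal \
      --level=5 --raid-devices=3 --chunk=64 /dev/sda2 /dev/sdb2 /dev/sdc2

# 3. Activate the LVM that sits on the raid5, mount the filesystems, and
#    bind mount /dev, /proc and /sys into the root filesystem
vgchange -ay
mkdir -p /mnt/sysimage
mount /dev/VolGroup00/LogVol00 /mnt/sysimage
mount /dev/md0 /mnt/sysimage/boot
mount --bind /dev /mnt/sysimage/dev
mount --bind /proc /mnt/sysimage/proc
mount --bind /sys /mnt/sysimage/sys

# 4. Drop the stale ARRAY lines from the installed mdadm.conf, then append
#    fresh ones (with -Eb remember to reflow the GUID to 8:8:8:8 by hand)
vi /mnt/sysimage/etc/mdadm.conf
mdadm -Db /dev/md0 >> /mnt/sysimage/etc/mdadm.conf
mdadm -Db /dev/md1 >> /mnt/sysimage/etc/mdadm.conf

# 5. chroot in and remake the initrd for the installed kernel
chroot /mnt/sysimage
mkinitrd -f /boot/initrd-2.6.18-1.2798.fc6.img 2.6.18-1.2798.fc6

# 6. Switch the partition types from fd (raid autodetect) to 83 (linux),
#    then leave the chroot and reboot
fdisk /dev/sda    # repeat for sdb and sdc
exit
reboot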

-- 
Doug Ledford <dledford@xxxxxxxxxx>
              GPG KeyID: CFBFF194
              http://people.redhat.com/dledford

Infiniband specific RPMs available at
              http://people.redhat.com/dledford/Infiniband
--- /sbin/mkinitrd	2006-09-28 12:51:28.000000000 -0400
+++ mkinitrd	2006-11-12 10:28:31.000000000 -0500
@@ -1096,6 +1096,13 @@
     mknod $MNTIMAGE/dev/efirtc c 10 136
 fi
 
+if [ -n "$raiddevices" ]; then
+    inst /sbin/mdadm.static "$MNTIMAGE/bin/mdadm"
+    if [ -f /etc/mdadm.conf ]; then
+        cp $verbose /etc/mdadm.conf "$MNTIMAGE/etc/mdadm.conf"
+    fi
+fi
+
 # FIXME -- this can really go poorly with clvm or duplicate vg names.
 # nash should do lvm probing for us and write its own configs.
 if [ -n "$vg_list" ]; then
@@ -1234,8 +1241,7 @@
 
 if [ -n "$raiddevices" ]; then
     for dev in $raiddevices; do
-        cp -a /dev/${dev} $MNTIMAGE/dev
-        emit "raidautorun /dev/${dev}"
+        emit "mdadm -As --auto=yes /dev/${dev}"
     done
 fi
 
--- /etc/rc.d/rc.sysinit	2006-10-04 18:14:53.000000000 -0400
+++ rc.sysinit	2006-11-12 10:29:03.000000000 -0500
@@ -403,7 +403,7 @@
 update_boot_stage RCraid
 [ -x /sbin/nash ] && echo "raidautorun /dev/md0" | nash --quiet
 if [ -f /etc/mdadm.conf ]; then
-    /sbin/mdadm -A -s
+    /sbin/mdadm -A -s --auto=yes
 fi
 
 # Device mapper & related initialization
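
For reference, if you skip the rc.sysinit patch and go the auto= route
instead, the ARRAY lines in mdadm.conf end up looking roughly like this
(the uuids here are placeholders, use the real ones from mdadm -Db or -Eb):

ARRAY /dev/md0 level=raid1 num-devices=2 auto=yes uuid=<uuid from -Db>
ARRAY /dev/md1 level=raid5 num-devices=3 auto=yes uuid=<uuid from -Db>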
