Guys,
I thought I would drop a line and see if anyone had done this recently. On a
server I had a two-disk array of 750G drives. One failed, and I ran with a
degraded array until the new pair of drives arrived. I installed the new drives
and moved the good disk from the failed array to /dev/sdc. I rebooted, marked
/dev/sdc (the SATA disk in the BIOS) as bootable, deleted the old array, and
created a new one in the BIOS for the new drives. The box booted right up on
the old drive.
Where I need help is in reconfiguring dmraid. The box is still booting using
the old dmraid designation for the degraded array even though I deleted the
array in the BIOS (which is pretty amazing). Here is how the system is
currently running:
19:29 nirvana:~> df -h
Filesystem                      Size  Used Avail Use% Mounted on
udev                             10M     0   10M   0% /dev
run                              10M  152K  9.9M   2% /run
/dev/mapper/nvidia_ddddhhfhp5    23G   13G  9.4G  57% /
shm                             1.9G     0  1.9G   0% /dev/shm
/dev/mapper/nvidia_ddddhhfhp10  608G  380G  198G  66% /home
/dev/mapper/nvidia_ddddhhfhp7   122M   34M   82M  30% /boot
/dev/mapper/nvidia_ddddhhfhp8    23G  6.0G   16G  28% /var
/dev/mapper/nvidia_ddddhhfhp9    33G  8.1G   23G  27% /srv
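In case it helps, this is how I've been checking what dmraid still sees
(standard dmraid options, run from the live system):

  # list the raid sets dmraid has activated
  dmraid -s
  # list the raw block devices that still carry fakeraid metadata
  dmraid -r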
What I need to do is install Arch on the new array and then get rid of the
dmraid config on the old drive so it just runs as /dev/sdc. After I have the
base Arch install on the new array, I can simply move the information from
/dev/sdc to the new array.
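For that last step, my best guess is that erasing the fakeraid metadata is what
turns the old disk back into a plain /dev/sdc. Something like the following,
run after booting from the new array (dmraid's -E erases the on-disk metadata;
the set name is taken from the df output above, so double-check before
running):

  # deactivate the old set (only possible once / is no longer on it)
  dmraid -an nvidia_ddddhhfh
  # erase the nvidia fakeraid metadata from the old disk
  dmraid -r -E /dev/sdc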
Is there any shortcut to installing Arch since I have the old install currently
running? Or is it easier to stick the install CD in, go through the base
install on the new disks, and then copy the old /var/cache/pacman/pkg to the
new array to complete the package install with local packages?
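The shortcut I had in mind is mounting the new array and copying the running
system across, roughly like this (the new set's device name is a placeholder;
I'd regenerate the initramfs and bootloader config on the new array
afterward):

  # mount the new array's root filesystem (placeholder device name)
  mount /dev/mapper/nvidia_newsetp1 /mnt
  # copy the running system, keeping the pseudo-fs mount points empty
  rsync -aAXH --exclude="/dev/*" --exclude="/proc/*" --exclude="/sys/*" \
        --exclude="/run/*" --exclude="/tmp/*" --exclude="/mnt/*" / /mnt/

That would carry /var/cache/pacman/pkg across too, so the local packages come
along for free.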
I'm fairly sure the dmraid designation on the old drive will take care of
itself when I do the install on the new array, but I'm not certain.
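If it doesn't take care of itself, I'm assuming the cleanup on the old drive
boils down to pointing the configs at the plain device and dropping the dmraid
hook (partition numbers are guesses for my layout):

  # /etc/fstab: swap the mapper paths for plain partitions, e.g.
  #   /dev/mapper/nvidia_ddddhhfhp5  ->  /dev/sdc5
  # /etc/mkinitcpio.conf: drop 'dmraid' from the HOOKS line, then rebuild:
  mkinitcpio -p linux    # or whatever preset the box uses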
Any pointers or hints? Thanks
--
David C. Rankin, J.D., P.E.