Re: raid 1




I am running RAID 1 on CentOS 4.4. One of the hard disks (sda1) failed. How can I carry on running the server using only sda2?

Generate a grub floppy and use it to load the grub menu from the sdb (probably now sda) disk.
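A rough sketch of the grub-legacy commands you would run from the floppy's grub> prompt to make the surviving disk bootable on its own (the device names here are assumptions; check them with the find command first):

```shell
# At the grub> prompt from the floppy (grub legacy, as shipped with CentOS 4).
grub> find /grub/stage1    # reports which (hdX,Y) actually holds /boot
grub> root (hd0,0)         # the /boot partition on the surviving disk
grub> setup (hd0)          # install grub to that disk's MBR so it boots alone
```

Once the MBR is rewritten the disk should boot without the floppy.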

If you are really talking about sda1 and sda2, those are partitions on the same disk.

Is there a detailed step-by-step howto? The RAID 1 has no LVM, just md0, md1 and md2: md0 is /boot, md1 is swap and md2 is the storage. I replaced sda with a new disk. When I tried to boot, it says kernel panic. How am I going to reconstruct the RAID and sync sdb to sda?

It might be easier to swap the old sdb into the sda position so you'll boot from it, but you should also be able to boot the install CD with

I swapped them and booted, and got a kernel panic error.

'linux rescue' at the boot prompt, and let it detect and mount your system (which will be the 'broken' raid devices with their single members),

If I use linux rescue, the 3 mds I created are gone. cat /proc/mdstat says Personalities : [raid0] [raid1] [raid5] [raid6], no longer Personalities : [raid1].

Perhaps your raid wasn't really working the way you thought before. From the rescue boot, does fdisk show the 3 partitions on the old disk with type 'fd'? Can you mount the old /boot and / partitions somewhere by hand? You should be able to do this with the /dev/sda1 and /dev/sda3 device names if the md devices aren't detected at boot.
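The checks suggested above might look like this from the rescue shell (device names are assumptions based on the partition layout described earlier: sda1 = /boot, sda3 = /):

```shell
# From the 'linux rescue' shell, inspect the old disk's partition table:
fdisk -l /dev/sda            # the Id column should read 'fd' (Linux raid autodetect)

# Try mounting the old /boot and / by hand, bypassing the md layer:
mkdir -p /mnt/oldboot /mnt/oldroot
mount /dev/sda1 /mnt/oldboot
mount /dev/sda3 /mnt/oldroot
```

If the hand mounts work, the data is intact and only the raid assembly is broken.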

cat /proc/partitions still shows me the 3 partitions.

Does fdisk say that they are type 'fd' (Linux raid autodetect)?

I actually copied /boot to the replacement disk and it is able to boot up, but without any filesystem, so I guess /boot is still intact. So do I need to mount /boot and /?

If you can get the original partitions detected as their md devices, you should create matching partitions on the replacement disk with fdisk, then 'mdadm --add ...' them and they will automatically sync up.
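A sketch of that procedure, assuming sdb holds the good (degraded) arrays and sda is the blank replacement disk; verify the device names on your system before running anything, since sfdisk will overwrite sda's partition table:

```shell
# Copy the partition layout from the surviving disk to the replacement:
sfdisk -d /dev/sdb | sfdisk /dev/sda

# Add the new partitions back into the degraded arrays:
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md1 --add /dev/sda2
mdadm /dev/md2 --add /dev/sda3

# Watch the resync progress:
cat /proc/mdstat
```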

mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create --verbose /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

If you already had raid devices on one of the disks you should not have had to --create them again. The original ones should have been detected and you should have been able to --add new matching partitions.
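The non-destructive alternative to --create would have been to assemble the existing arrays from their surviving members (device names are assumptions; --run starts an array even though one member is missing):

```shell
# Bring up the degraded arrays from the old disk's partitions
# instead of re-creating them:
mdadm --assemble --run /dev/md0 /dev/sdb1
mdadm --assemble --run /dev/md1 /dev/sdb2
mdadm --assemble --run /dev/md2 /dev/sdb3
```

Unlike --create, --assemble reads the existing superblocks and never starts a resync on its own.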

I created them because the md devices were no longer there.


After that I rebooted and got the kernel panic again.

md: Autodetecting RAID arrays
md: autorun ...
md: considering sdb1
md: adding sdb1
md: created md0
md: bind<sda1>
md: running: <sdb1><sda1>
raid1: raid set md0 active with 2 out of 2 mirrors
md: ... autorun DONE.
Creating root device
Mounting root filesystem
Switching to new root
switchroot: mount failed: 22
umount /initrd/dev failed: 2
Kernel panic

When you --create a new raid it will start to sync the mirrors. It may have done this in the wrong direction, overwriting your old contents. Can you still do a rescue mode boot, mount /dev/sda3 (or sdb3 if the old drive is in the 2nd position) and see the contents?
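Two quick ways to check what survived, from the rescue shell (device names are assumptions; --examine reads the md superblock without changing anything):

```shell
# Inspect the md superblocks: creation time, event counts, sync state.
mdadm --examine /dev/sda3
mdadm --examine /dev/sdb3

# Check whether a filesystem signature is still present on the partition:
file -s /dev/sda3
```

If file reports only "data" instead of an ext2/ext3 signature, the resync has likely overwritten the filesystem.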

I am unable to mount sda3.
# mount /dev/sda3 /mnt/part3
mount: Mounting /dev/sda3 on /mnt failed: Invalid argument
sdb is no longer detectable.



_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos
