Fiddling with software RAID1: continue working with one of two disks failing?




Hi,

I'm currently experimenting with software RAID1 on a spare PC with two 
40 GB hard disks. Normally, on a desktop PC with only one hard disk, I 
have a very simple partitioning scheme like this:

/dev/hda1  80 MB    /boot   ext2
/dev/hda2   1 GB    swap
/dev/hda3  39 GB    /       ext3

Here's what I'd like to do: partition a second hard disk (say, /dev/hdb) 
into three matching partitions and set up RAID1 like this:

/dev/md0   80 MB    /boot   ext2
/dev/md1    1 GB    swap
/dev/md2   39 GB    /       ext3
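
If I had built the arrays by hand instead of letting the installer do 
it, I believe the commands would look roughly like this (just a sketch, 
not what I actually typed; same partition-to-array mapping as above):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda2 /dev/hdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hda3 /dev/hdb3
mkfs.ext2 /dev/md0
mkswap /dev/md1
mkfs.ext3 /dev/md2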

I somehow managed to get this far. Here's what I have:

[root@raymonde ~]# fdisk -l /dev/hda

Disk /dev/hda: 41.1 GB, 41110142976 bytes
255 heads, 63 sectors/track, 4998 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot   Start    End      Blocks   Id  System
/dev/hda1   *        1     11       88326   fd  Linux raid autodetect
/dev/hda2           12    134      987997+  fd  Linux raid autodetect
/dev/hda3          135   4998    39070080   fd  Linux raid autodetect

[root@raymonde ~]# fdisk -l /dev/hdb

Disk /dev/hdb: 41.1 GB, 41110142976 bytes
16 heads, 63 sectors/track, 79656 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes

    Device Boot   Start    End      Blocks   Id  System
/dev/hdb1   *        1    156       78592+  fd  Linux raid autodetect
/dev/hdb2          157   2095      977256   fd  Linux raid autodetect
/dev/hdb3         2096  79656    39090744   fd  Linux raid autodetect
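
As a side note, I partitioned /dev/hdb by hand, which is why its 
geometry and block counts don't match /dev/hda exactly. I suppose 
something like

sfdisk -d /dev/hda | sfdisk /dev/hdb

would have cloned the partition table one-to-one, but I haven't tried 
it.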

During the install, /dev/md1 and /dev/md2 somehow got swapped compared 
to my plan above (md1 ended up as /, md2 as swap), which doesn't really 
matter:

[root@raymonde ~]# cat /etc/fstab
/dev/md1        /               ext3    defaults        1 1
/dev/md0        /boot           ext2    defaults        1 2
tmpfs           /dev/shm        tmpfs   defaults        0 0
devpts          /dev/pts        devpts  gid=5,mode=620  0 0
sysfs           /sys            sysfs   defaults        0 0
proc            /proc           proc    defaults        0 0
/dev/md2        swap            swap    defaults        0 0
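
From what I've read, the state of the arrays can be checked with 
something like

cat /proc/mdstat
mdadm --detail /dev/md1

(mentioning it in case it helps; /dev/md1 being my / array here).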

I wasn't sure where to install GRUB, so I chose /dev/md0.

I was wondering whether this setup would, in theory, let me keep 
working after one of the disks fails. So I tried unplugging the power 
cable of one of my hard disks... which resulted in a "GRUB Disk Error" 
on boot.
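
(I suppose a gentler test would have been to fail a member in software 
first, something like

mdadm /dev/md1 --fail /dev/hdb3
mdadm /dev/md1 --remove /dev/hdb3

assuming /dev/hdb3 is indeed the hdb member of /dev/md1, but I wanted 
to simulate a disk that really dies.)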

Question: is there a way to still boot and run the system with either 
of the two disks "damaged" (in this case, unplugged)? And if so, how 
would I go about it with my setup?
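
My guess is that GRUB needs to be installed in the MBR of *both* disks, 
maybe with something like this in the grub shell (assuming (hd0,0) maps 
to hdb1, the /boot mirror; I haven't dared to try it yet):

grub> device (hd0) /dev/hdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

but I'd rather hear from someone who has actually done this before I 
break anything.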

Cheers from the freezing South of France,

Niki




