Ruslan Sivak wrote:
> Interesting thing... I built the following setup:
> /boot on raid1
> swap on raid0
Swap on raid1 has a chance of working through a drive failure. Raid0
doesn't.
> / on raid6
Does the installer do that?
> /data on 2 LVM raid1's.
If you are going to use LVM you don't have to match your partitions
across all 4 drives. Put /boot, swap, / on raid1 on the 1st 2 drives
with another raid1 for the rest of the space. Then make a raid1 using
partitions that fill your 3rd and 4th drive and combine the two large
raid1's in LVM. That leaves it so you can expand if you want to add
more drives.
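A minimal sketch of that layout, assuming the big partitions are sda4/sdb4 on the first pair and whole-disk partitions sdc1/sdd1 on the second pair (all device names, and the "datavg" volume group name, are examples to adjust):

```shell
# Two RAID1 pairs, then combine them into one LVM volume group.
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

pvcreate /dev/md2 /dev/md3              # make both arrays LVM physical volumes
vgcreate datavg /dev/md2 /dev/md3       # "datavg" is an example VG name
lvcreate -l 100%FREE -n data datavg     # one LV spanning both arrays
mkfs.ext3 /dev/datavg/data
```

Growing later means creating another raid1 from new drives, then `vgextend datavg /dev/mdN` and `lvextend`.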
> I shut down and plucked out one of the drives (the 3rd one, I believe).
> Booted back up, and everything was fine, even swap (I think). I rebooted,
> put in the old drive, hot-added the partitions, and everything rebuilt
> beautifully (again, not sure about swap).
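For reference, the hot-add step above is just mdadm's --add per array (md names and partitions here are illustrative):

```shell
# Re-add the returned drive's partitions to their degraded arrays;
# the kernel rebuilds them in the background.
mdadm /dev/md0 --add /dev/sdc1
mdadm /dev/md1 --add /dev/sdc2

cat /proc/mdstat    # watch rebuild progress
```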
> I decided to run one more test. I plucked out the first (boot) drive.
> Upon reboot, I was greeted by "GRUB" all over the screen. Upon booting
> into rescue mode, it couldn't find any partitions. I was able to mount
> /boot, and it let me recreate the raid1 partitions, but no luck with
> raid6. This is the second time this has happened. Am I doing
> something wrong? It seems that when I pluck out the first drive, the
> drive letters shift (since sda is missing, sdb becomes sda, sdc becomes
> sdb, and sdd becomes sdc).
The only thing that should care about this is grub. Everything
else should autodetect.
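Since grub is the one piece tied to BIOS drive order, a common precaution is to install it to the MBR of every array member ahead of time, so whichever drive the BIOS falls back to can boot (legacy GRUB syntax; the device mapping below is an example for the second disk, repeat for the others):

```shell
# Tell GRUB to treat /dev/sdb as (hd0), since that's what it will be
# once sda is gone, then install to its MBR.
grub --batch <<'EOF'
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
quit
EOF
```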
> What's the proper repair method for a raid6 in this case? Or should I
> just avoid raid6 and put / on an LVM of 2 raid1's?
I'd put / on one raid1 with no LVM. And personally, I'd do the same
with the rest of the space and deal with the extra partition by mounting
it somewhere. LVM avoids the need for that, but at the expense of no
longer being able to recover data from any single drive.
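As for repairing the raid6 itself after the drive letters shift: mdadm identifies members by their superblocks, not their names, so reassembly from the surviving partitions usually works. A sketch, assuming the array is md2 on each disk's third partition (names are examples):

```shell
# Try autodetection first; mdadm matches members by UUID in the superblock.
mdadm --assemble --scan

# Or name the survivors explicitly; --run starts the array degraded
# (raid6 tolerates up to two missing members).
mdadm --assemble --run /dev/md2 /dev/sda3 /dev/sdb3 /dev/sdc3

# Once it's running, add a partition on the replacement drive to resync.
mdadm /dev/md2 --add /dev/sdd3
```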
--
Les Mikesell
lesmikesell@xxxxxxxxx
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos