Andreas Micklei wrote:
On Monday, 7 May 2007, Ruslan Sivak wrote:
I've just installed the system as follows
Raid1 for /boot with 2 spares (200mb)
raid0 for swap (1GB)
raid6 for / (10GB)
NEVER EVER use raid0 for swap if you want reliability. If one drive fails, the
virtual memory gets corrupted and the machine will crash horribly (tm).
Besides, creating separate swap partitions on different physical discs will
give you the same kind of performance, so using striping on a swap partition
is kind of useless for gaining performance.
I suggest using raid-1 or raid-6 for swap, so the machine can stay up if one
drive fails.
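For instance, two plain swap partitions given equal priority in /etc/fstab
will be interleaved by the kernel automatically (device names here are only
an example):

    # /etc/fstab -- equal pri= values make the kernel stripe across them
    /dev/sda2   none   swap   sw,pri=1   0 0
    /dev/sdb2   none   swap   sw,pri=1   0 0

And for swap that survives a failed drive, a raid1 can be made the usual
way, e.g.:

    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mkswap /dev/md1 && swapon /dev/md1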
Interesting thing... I built the following setup:
/boot on raid1
swap on raid0
/ on raid6
/data on an LVM of 2 raid1's.
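If it helps, recreating that layout by hand would look roughly like this
(the md numbers and partition numbering are assumptions on my part; anaconda
did the actual creation):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=2 /dev/sd[abcd]1   # /boot
    mdadm --create /dev/md1 --level=0 --raid-devices=4 /dev/sd[abcd]2                     # swap
    mdadm --create /dev/md2 --level=6 --raid-devices=4 /dev/sd[abcd]3                     # /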
I shut down and plucked out one of the drives (the 3rd one, I believe).
Booted back up, and everything was fine. Even swap (I think). I rebooted,
put in the old drive, hot-added the partitions, and everything rebuilt
beautifully (again, not sure about swap).
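(The hot-add itself was just mdadm's --add against each degraded array;
the device names below are from memory, so check /proc/mdstat first:)

    cat /proc/mdstat                  # see which arrays are degraded
    mdadm /dev/md0 --add /dev/sdc1    # /boot raid1
    mdadm /dev/md2 --add /dev/sdc3    # / raid6
    watch cat /proc/mdstat            # watch them resync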
I decided to run one more test. I plucked out the first (boot) drive.
Upon reboot, I was greeted by GRUB all over the screen. When booting
into rescue mode, it couldn't find any partitions. I was able to mount
boot, and it let me recreate the raid1 partitions, but no luck with
raid6. This is the second time this has happened. Am I doing
something wrong? It seems that when I pluck out the first drive, the drive
letters shift (since sda is missing, sdb becomes sda, sdc becomes sdb
and sdd becomes sdc).
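I'm guessing the fix is to stop relying on drive letters at all and let
mdadm assemble by the UUIDs stored in the superblocks, plus putting GRUB on
every disc, i.e. something like this (untested on my end):

    # record the arrays by UUID so assembly doesn't care about sda/sdb/...
    mdadm --examine --scan >> /etc/mdadm.conf
    # from rescue mode, assemble by scanning instead of by device name
    mdadm --assemble --scan
    # and install GRUB on each drive so losing the first one still boots
    grub-install /dev/sda
    grub-install /dev/sdb
    grub-install /dev/sdc
    grub-install /dev/sdd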
What's the proper repair method for a raid6 in this case? Or should I
just avoid raid6 and put / on an LVM of 2 raid1's? Is there any way to set up
interleaving (although testing raid1 vs raid10 with hdparm -t shows
only a marginal performance improvement)?
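(On the interleaving question: if I read the lvcreate man page right, LVM
can do the striping itself across the two raid1 PVs, which would give
raid10-like behaviour; the names below are made up:)

    pvcreate /dev/md3 /dev/md4            # the two raid1 arrays as PVs
    vgcreate vg_data /dev/md3 /dev/md4
    # -i 2 stripes across both PVs, -I 64 sets a 64K stripe size
    lvcreate -i 2 -I 64 -L 100G -n lv_data vg_data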
Russ