Ross S. W. Walker wrote:
-----Original Message-----
From: centos-bounces@xxxxxxxxxx
[mailto:centos-bounces@xxxxxxxxxx] On Behalf Of Ruslan Sivak
Sent: Monday, May 07, 2007 4:00 PM
To: CentOS mailing list
Subject: Re: Anaconda doesn't support raid10
Ross S. W. Walker wrote:
-----Original Message-----
From: centos-bounces@xxxxxxxxxx
[mailto:centos-bounces@xxxxxxxxxx] On Behalf Of Ruslan Sivak
Sent: Monday, May 07, 2007 12:53 PM
To: CentOS mailing list
Subject: Anaconda doesn't support raid10
So after troubleshooting this for about a week, I was finally able to
create a raid10 device by installing the system, copying the md modules
onto a floppy, and loading the raid10 module during the install.
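For reference, the workaround from the install shell looked roughly like
this (the module path is illustrative; it depends on the kernel version):

    # Alt-F2 shell during install; raid10.ko copied over from the floppy
    insmod /tmp/raid10.ko
    cat /proc/mdstat    # the raid10 personality should now be listed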
Now the problem is that I can't get it to show up in anaconda. It
detects the other arrays (raid0 and raid1) fine, but the raid10 array
won't show up. Looking through the logs (Alt-F3), I see the following
warning:

WARNING: raid level RAID10 not supported, skipping md10.
I'm starting to hate the installer more and more. Why won't it let me
install on this device, even though it's working perfectly from the
shell? Why am I the only one having this problem? Is nobody out there
using md-based raid10?
Most people install the OS on a 2-disk raid1, then create a separate
raid10 for data storage.
Anaconda was never designed to create RAID5/RAID10 during install.
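As a post-install sketch (partition names are hypothetical, adjust to
your layout):

    # OS on a 2-disk raid1
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    # separate 4-disk raid10 for data, created after install
    mdadm --create /dev/md1 --level=10 --raid-devices=4 \
        /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2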
-Ross
Whether or not it was designed to create raid5/raid10, it allows the
creation of raid5 and raid6 during install. It doesn't, however, allow
the use of raid10 even if it's created in the shell outside of anaconda
(or if you have an old installation on a raid10).
I've just installed the system as follows:

raid1 for /boot with 2 spares (200MB)
raid0 for swap (1GB)
raid6 for / (10GB)
After installing, I was able to create a raid10 device, successfully
mount it, and have it automounted via /etc/fstab.
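Roughly what I did (partition names are from memory and may differ on
your box):

    mdadm --create /dev/md10 --level=10 --raid-devices=4 \
        /dev/sda5 /dev/sdb5 /dev/sdc5 /dev/sdd5
    mkfs.ext3 /dev/md10

    # /etc/fstab entry for the automount:
    /dev/md10    /data    ext3    defaults    1 2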
Now to test what happens when a drive fails. I pulled out the first
drive - the box refuses to boot. Going into rescue mode, I was able to
mount /boot, was not able to mount the swap device (as is to be
expected, since it's a raid0), and was also not able to mount / for some
reason, which is a little surprising.
I was able to mount the raid10 partition just fine.
Maybe I messed up somewhere along the line. I'll try again, but it's
disheartening to see a raid6 array die after one drive failure, even if
it was somehow my fault.
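Next time I'll check the array state from the rescue shell before giving
up, something along these lines (md2 as the raid6 is an assumption,
substitute the real device and partitions):

    mdadm --assemble --scan     # try to bring up all the arrays
    cat /proc/mdstat            # which came up, and in what state?
    mdadm --detail /dev/md2     # shows failed/missing members
    # last resort for an array that refuses to start degraded:
    mdadm --assemble --force /dev/md2 /dev/sdb3 /dev/sdc3 /dev/sdd3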
Also, assuming that the raid6 array could be recovered, what would I do
with the swap partition? Would I just recreate it from the space on the
leftover drives, and would that be all I need to boot?
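My guess at the swap recovery, assuming one swap partition per surviving
drive (device names are illustrative):

    mdadm --create /dev/md1 --level=0 --raid-devices=3 \
        /dev/sdb2 /dev/sdc2 /dev/sdd2
    mkswap /dev/md1
    swapon /dev/md1    # /etc/fstab can keep pointing at /dev/md1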
OK, my bad - raid5/6 can be created during install, even if the OS
can't boot from it.

I guess raid10 is the red-headed stepchild of anaconda...
I suggest this:
/dev/md0 raid1, 128MB partition, all 4 drives, for /boot
/dev/md1 raid1, rest of drive space, first 2 drives, for lvm
/dev/md2 raid1, rest of drive space, second 2 drives, for lvm
lvm volgroup CentOS, comprised of /dev/md1 and /dev/md2
logical vol1, root, interleave 2, mount /, 16GB
logical vol2, swap, interleave 2, swapfs, 4GB
This will provide the same performance and fail-over as a raid10.
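Post-install, the LVM half of that would be roughly as follows (the 64KB
stripe size is just a sane default, not a requirement):

    pvcreate /dev/md1 /dev/md2
    vgcreate CentOS /dev/md1 /dev/md2
    # -i 2 = interleave (stripe) across both PVs, -I 64 = 64KB stripes
    lvcreate -i 2 -I 64 -L 16G -n root CentOS
    lvcreate -i 2 -I 64 -L 4G -n swap CentOS
    mkfs.ext3 /dev/CentOS/root
    mkswap /dev/CentOS/swap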
If you remove the first disk and boot, make sure the BIOS is set to boot
off of disk 2!
-Ross
_____________________________________________________________________
I don't seem to be able to control the interleave through anaconda. Is
this something that can be done post-install?

Also, I'm not very comfortable using LVM yet. Just getting used to md.
Russ
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos