Re: Anaconda doesn't support raid10




On Tuesday, 8 May 2007, Ruslan Sivak wrote:
> Andreas Micklei wrote:
> > On Monday, 7 May 2007, Ruslan Sivak wrote:
> >> I've just installed the system as follows
> >>
> >> Raid1 for /boot with 2 spares (200MB)
> >> raid0 for swap (1GB)
> >> raid6 for / (10GB)
> >
> > NEVER EVER use raid0 for swap if you want reliability. If one drive fails,
> > the virtual memory gets corrupted and the machine will crash horribly
> > (tm). Besides, creating separate swap partitions on different physical
> > discs will give you the same kind of performance, so using striping on a
> > swap partition is kind of useless for gaining performance.
> >
> > I suggest using raid-1 or raid-6 for swap, so the machine can stay up if
> > one drive fails.
>
> Interesting thing... I built the following setup:
>
> /boot on raid1
> swap on raid0
> / on raid6
> /data on an LVM of 2 raid1's.

Again:

http://tldp.org/HOWTO/Software-RAID-HOWTO-2.html#ss2.3
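
To spell out the alternative from that HOWTO section: plain swap partitions
on each disc, all given the same priority, are striped by the kernel on
their own, no md device needed. A minimal sketch, assuming hypothetical
partitions sda2 and sdb2 as the swap areas:

  # /etc/fstab -- equal pri= values make the kernel stripe across both
  /dev/sda2  none  swap  sw,pri=1  0 0
  /dev/sdb2  none  swap  sw,pri=1  0 0

If one disc dies, anything swapped onto it is still lost, which is why
RAID-1 for swap is the safer choice when uptime matters.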

> I shut down and plucked out one of the drives (3rd one I believe).
> Booted back up, everything was fine.  Even swap (I think).  I rebooted,
> put in the old drive, hot-added the partitions, and everything rebuilt
> beautifully.  (Again, not sure about swap.)

Swap probably was not in use at the time, or else your machine would have 
crashed. RAID-0 does not degrade when you pull out one disc; it simply fails. 
So with swap in use, the effect is the same as a RAM module going bad.
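
If you want swap that survives a failed disc, a small RAID-1 is enough.
A rough sketch with mdadm (the device names below are only examples,
adjust them to your layout):

  # mirror two partitions, then use the array as swap
  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  mkswap /dev/md3
  swapon /dev/md3

You trade half the space for the guarantee that a single disc failure
does not take the machine down.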

> I decided to run one more test.  I plucked out the first (boot) drive.
> Upon reboot, I got greeted by GRUB all over the screen.  Upon booting
> into rescue mode, it couldn't find any partitions.  I was able to mount
> boot, and it let me recreate the raid1 partitions, but no luck with
> raid6.  This is the second time that this has happened.  Am I doing
> something wrong?  It seems when I pluck out the first drive, the drive
> letters shift (since sda is missing, sdb becomes sda, sdc becomes sdb
> and sdd becomes sdc).
>
> What's the proper repair method for a raid6 in this case?  Or should I
> just avoid raid6 and put / on an LVM of 2 raid1's?  Any way to set up
> interleaving?  (Although testing raid1 vs raid10 with hdparm -t gives
> only a marginal performance improvement.)

I haven't played with software RAID-6 and currently only use software RAID-5 
on one machine (RAID-1 for boot). I am also not very familiar with LVM, so 
I can't be of much help, I fear. However, I find the Linux Software RAID 
HOWTO a very valuable resource, although it is a few years old:

http://tldp.org/HOWTO/Software-RAID-HOWTO.html
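
That said, the generic recovery steps from the HOWTO look roughly like this
(array and partition names below are made up for illustration):

  cat /proc/mdstat                # which arrays are degraded?
  mdadm --detail /dev/md2         # which member is failed or missing?
  mdadm /dev/md2 --add /dev/sdd3  # hot-add the replacement partition

Note that mdadm assembles arrays by the UUID stored in each member's
superblock, not by device letters, so the sda/sdb shifting you describe
should not confuse the arrays themselves. Recording the arrays once with
"mdadm --detail --scan >> /etc/mdadm.conf" may also help a rescue
environment find them.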

regards,
Andreas Micklei

-- 
Andreas Micklei
IVISTAR Kommunikationssysteme AG
Ehrenbergstr. 19 / 10245 Berlin, Germany
http://www.ivistar.de

Commercial register: Berlin Charlottenburg HRB 75173
VAT ID: DE207795030
Management board: Dr.-Ing. Dirk Elias
Supervisory board chairman: Dipl.-Betriebsw. Frank Bindel

_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos
