Turning root partition into a RAID array




Nigel kendrick wrote:

>Hope that's all clear. If anyone wants to comment on what I did I'd be happy
>to hear what you think and perhaps what I might have done differently!

Sounds like you did what you had to in order to get your client up and 
running again while under a bit of time pressure.  Oftentimes we wind 
up with a suboptimal solution in those circumstances, but doing things 
"the right way" isn't always the fastest way.  8-)

If it were me (and it isn't), I probably would have set up the 3 disks 
in a RAID5 array.  That way, you can lose one disk and the system still 
hums along until you can replace the failed one.  The way you had 
things before, you could also lose one disk and keep humming 
along...just as long as it wasn't the system disk that failed.  8-)  If 
your client has the budget, and you've got the time, you might want to 
consider building a new box with a 3Ware hardware RAID card and 4 
disks.  You can run a 3-disk RAID5 with one hot spare, so if you lose a 
disk in the array, it will grab the spare and automagically rebuild the 
array.  In theory, that lets you lose 2 of your 4 drives (as long as 
the 2nd doesn't go away while the rebuild is happening) without the 
system crashing and burning.  Depending on your space requirements, 
you could upgrade to four 80GB SATA drives and something like a 
3Ware 8506-4LP or 9500S-4LP for about $600 over the cost of a new 
system.  CentOS 4.2 should install on a RAID5 array on either of these 
cards without any fuss.
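If you go the software RAID route instead of a 3Ware card, the same RAID5-plus-hot-spare layout can be sketched with mdadm.  This is just a sketch, not something to paste in blindly -- the device names (/dev/sdb through /dev/sde) are assumptions, and mdadm --create will wipe whatever is on those disks:

```shell
# Sketch only: device names are assumptions -- check them against your
# hardware first; --create destroys any existing data on these disks.

# Create a 3-disk RAID5 array with one hot spare:
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      --spare-devices=1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Watch the initial build (and any later rebuild onto the spare):
cat /proc/mdstat

# Save the array definition so it assembles at boot:
mdadm --detail --scan >> /etc/mdadm.conf

# Simulate a disk failure to confirm the spare kicks in automatically:
mdadm /dev/md0 --fail /dev/sdb
mdadm --detail /dev/md0
```

After the --fail, /proc/mdstat should show the array rebuilding onto the spare on its own, which is the same "automagic" behavior the 3Ware card gives you in hardware.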

Cheers,



