RAID 5 vs. RAID 10




On Mon, 2005-07-25 at 17:57 +0800, Andrew Vong wrote:
> Hi, I am looking into purchasing a new server. This server will be
> mission-critical. 

What application(s)?  That's the biggie.

> I have read and somewhat understood the theories behind RAIDs 0, 1, 5,
> 10 & JBOD. However, I would like to get some feedback from those who
> have experience in implementing and recovering from a HDD failure
> using RAID. 

On a 3Ware card (no spare):  
- Pull out bad drive
- Put in new drive
- Tell 3DM2 to rebuild array
- Done.

On a 3Ware card (spare):  
- Automagically rebuilds array from designated spare
- Pull out bad drive
- Put in new drive
- Tell 3DM2 about new spare
- Done.
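
If you were doing this with Linux software RAID (md) instead of the
3Ware card, the rebuild status lives in /proc/mdstat rather than 3DM2.
A rough Python sketch for watching it (Linux md only -- a hardware
array won't show up here; nothing 3Ware-specific is assumed):

# Sketch: report Linux software-RAID (md) rebuild progress from /proc/mdstat.
# Does NOT apply to a 3Ware hardware array, where the card rebuilds on its
# own and 3DM2 reports the status.
import re

def md_progress(path="/proc/mdstat"):
    with open(path) as f:
        for line in f:
            # progress lines look like:
            #   [==>.........]  recovery = 12.6% (36928/292968960) finish=93.2min
            m = re.search(r"(recovery|resync)\s*=\s*([\d.]+)%", line)
            if m:
                print("%s: %.1f%% complete" % (m.group(1), float(m.group(2))))

if __name__ == "__main__":
    md_progress()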

> Hardware specs include:-
> Dual Xeon 3.2 GHz
> 2 GB RAM

What chipset?  What mainboard?  What is your I/O configuration?

> I would like to implement hardware RAID but am unsure as to which
> would be most suitable for my needs. Any advice is appreciated. 
> Option 1 - RAID 5 (3 hdd's) + 1 hot spare
> Option 2 - RAID 10 (4 hdd's) + 1 "cold" spare (in the shelf)
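
For reference, the raw capacity math on those two options, assuming
300 GB drives (a quick back-of-the-envelope sketch in Python, nothing
vendor-exact):

# Back-of-the-envelope usable capacity for the two proposed options.
# Assumes 300 GB drives; spares (hot or cold) contribute no capacity.
DRIVE_GB = 300

def raid5_usable(n_disks, drive_gb=DRIVE_GB):
    # RAID-5 spends one disk's worth of space on parity across the set.
    return (n_disks - 1) * drive_gb

def raid10_usable(n_disks, drive_gb=DRIVE_GB):
    # RAID-10 mirrors everything, so half the raw space is usable.
    return n_disks // 2 * drive_gb

print("Option 1, RAID-5 on 3 drives + hot spare :", raid5_usable(3), "GB usable")
print("Option 2, RAID-10 on 4 drives + cold spare:", raid10_usable(4), "GB usable")
# Both options end up at 600 GB usable, so capacity alone won't decide it.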

> Questions I have :-
> 1) When should RAID 5 be implemented? 

- When you want maximum storage capacity per disk
  (the more disks, the better)
- When you have largely read-only data
- When much of your read-only data is in a database
- When you have buffering RAID hardware

RAID-5 acts like RAID-0 during reads.
But during writes, RAID-5 can get bogged down in parity commits and
needs a lot of buffering.
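
The reason is the small-write penalty: updating one block means reading
the old data and old parity, recomputing parity, and writing both back
-- 4 I/Os where RAID-1[0] needs 2.  A toy Python illustration (byte-wise
XOR on dummy blocks, obviously not how a real controller is built):

# Toy illustration of the RAID-5 small-write penalty.
# Updating one data block costs 2 reads + 2 writes, because parity must be
# recomputed as: new_parity = old_parity XOR old_data XOR new_data.
def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

old_data   = bytes([0x11] * 8)   # block being overwritten (read #1)
old_parity = bytes([0xAA] * 8)   # current parity block    (read #2)
new_data   = bytes([0x22] * 8)   # incoming write

new_parity = xor_blocks(xor_blocks(old_parity, old_data), new_data)
# ...then write new_data (write #1) and new_parity (write #2).
print("4 disk I/Os for one logical write; new parity =", new_parity.hex())
# RAID-10 just writes the block to both mirrors: 2 writes, no reads.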

RAID-4 is better when you have large block/file writes/reads, and is
used by some vendors (e.g., NetApp filers, especially for NFS).

RAID-3 (and NetCell's "RAID-XL") is better for desktops.

> 2) When should RAID 10 be implemented?

- When you want maximum write performance
- When you have lots of independent reads
- When you have non-blocking RAID and disk hardware (ASIC+SRAM, ATA)

RAID-10 acts like two independent RAID-0 volumes when reading.
During writes, it's much faster than RAID-5 in many cases, especially
for system, swap, etc.
For file and large data servers, RAID-10 kicks RAID-5's butt in most
applications.
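
Rough rule-of-thumb math on the write gap (my own assumptions: ~75
random IOPS per commodity 7200rpm drive, small random writes, no
controller cache):

# Rule-of-thumb random-write IOPS, ignoring controller cache.
# Assumption: ~75 random IOPS per commodity 7200rpm drive.
PER_DISK_IOPS = 75

def raid10_write_iops(n_disks):
    # Every write hits two mirrored disks.
    return n_disks * PER_DISK_IOPS / 2

def raid5_write_iops(n_disks):
    # Every small write costs 4 I/Os (read data, read parity, write both).
    return n_disks * PER_DISK_IOPS / 4

print("4-disk RAID-10 ~", raid10_write_iops(4), "write IOPS")
print("4-disk RAID-5  ~", raid5_write_iops(4), "write IOPS")
# Reads are roughly a wash: both levels spread reads across all members.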

For a public/external web server, where I/O is limited to well below
LAN speeds, RAID-10 loses much of its advantage over RAID-5, because
disk access/throughput is far less important.

> 3) Is RAID 5 with a hot spare safer than RAID 10 with a "cold" spare?

Of course, because most hardware RAID cards will automatically rebuild
on the spare drive for you.

> 4) Is it possible to configure RAID 10 to have a hot spare? 

Of course!  Just get enough channels on your hardware RAID card to do
so.  E.g., get at least an 8-channel card, and use 6 drives for RAID-10,
leaving 2 channels for spares.

When cost is an issue, I typically do a "near-hot spare":  I get a
4-channel SATA card and an Enlight 5-bay case, and keep the "cold
spare" already mounted in a drive can, so it's simply a matter of
plugging it into a "hot" bay.

> 5) Should one of the HDDs fail, a hot spare w kick-in immediately and
> begin rebuilding.

Yes.

> As I am planning to put in 300 GB HDDs,

For "Mission Critical" servers, I'd almost push you towards the 73GB,
10,000rpm WD Raptor SATA drives.  They are "enterprise class" and
roll off the same line as Hitachi's U320 drives, so their vibration and
other attributes are 3-8x better than typical commodity drives.

Otherwise, I continue to be a big fan of Seagate for commodity drives,
with their 5-year warranties.  They can offer that because their new
crop of materials can take 60C operating environments for longer
durations (although they clearly don't recommend 24x7 operation).

> how long would this take on a RAID 5 vs. RAID 10? 

Depends on the RAID card.  RAID-1[0] rebuild is just a direct copy of a
disk.  RAID-5 actually writes less data, but reads far more than
RAID-1[0] -- from X-1 disks -- so it can take much longer.  If your
RAID card doesn't buffer RAID-5 well (e.g., 3Ware Escalade 7000/8000,
basically anything pre-9000 series), then it can take a long time.
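
Rough numbers, assuming ~50 MB/s sustained per drive and an otherwise
idle array (pure back-of-the-envelope; real cards throttle rebuilds
under load):

# Rough best-case rebuild time for a failed 300 GB member.
# Assumptions: ~50 MB/s sustained per drive, idle array, no throttling.
DRIVE_GB = 300
MB_PER_SEC = 50

def rebuild_hours(drive_gb=DRIVE_GB, rate_mb_s=MB_PER_SEC):
    # RAID-1/10: copy one surviving mirror onto the new disk.
    # RAID-5: the same amount gets written, but every surviving member
    # must be read to regenerate it, so a card that can't buffer the
    # parity math (pre-9000 3Ware) stretches this out badly.
    return drive_gb * 1024 / rate_mb_s / 3600

print("best case, either level: ~%.1f hours per 300 GB" % rebuild_hours())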

> 6) Will there be a degradation in performance for users on the system
> (RAID 5 vs. RAID 10)? 

Yes.  The only time I haven't seen a card take a massive performance hit
during a RAID rebuild is on the NetCell products with their RAID-XL.
But those are only for desktops (definitely not a design for servers).

You want to minimize rebuild time, period.

> 7) What are the disadvantages of using RAID 5 vs. RAID 10? 

Write performance, especially on a direct block device like [S]ATA.
Unless you are building a web server where all you'll be doing is
reading 99.9% of the time, I highly recommend against RAID-5.

And even when building a web server, I still recommend the "system"
drive be RAID-10.  E.g., with an 8-channel controller, consider:  

- 4-disk RAID-10 System
- 3-disk RAID-5 Data
- 1-disk Hot Spare (which can be used for _either_ ;-)

With a 12-channel controller, make the RAID-5 data volume 7 disks.
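
Same back-of-the-envelope capacity math as above, applied to that
layout (still assuming 300 GB drives):

# Usable space for the suggested 8-channel layout, assuming 300 GB drives.
DRIVE_GB = 300
system_gb = 4 // 2 * DRIVE_GB        # 4-disk RAID-10 system volume
data_gb   = (3 - 1) * DRIVE_GB       # 3-disk RAID-5 data volume
print("8-ch : system", system_gb, "GB, data", data_gb, "GB, +1 shared hot spare")

# 12-channel version: 4-disk RAID-10 + 7-disk RAID-5 + 1 hot spare
print("12-ch: data", (7 - 1) * DRIVE_GB, "GB")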

> Thanks in advance for answering my questions.


-- 
Bryan J. Smith                                     b.j.smith@xxxxxxxx 
--------------------------------------------------------------------- 
It is mathematically impossible for someone who makes more than you
to be anything but richer than you.  Any tax rate that penalizes them
will also penalize you similarly (to those below you, and then below
them).  Linear algebra, let alone differential calculus or even ele-
mentary concepts of limits, is mutually exclusive with US journalism.
So forget even attempting to explain how tax cuts work.  ;->


