Re: OT: What's wrong with RAID5




[re-sending with the right email address, hoping the first
email won't make it through..]


Fajar Priyanto wrote:
> Hi all,
> Sorry for the OT.
> I've got an IBM N3300-A10 NAS. It runs Data Ontap 7.2.5.1.
> The problem is, from the docs it says that it only supports either RAID-DP
> or RAID4.
> What I want to achieve is Max Storage Capacity, so I change it from
> RAID-DP to RAID4, but with RAID4, the maximum disk in a RAID Group
> decrease from 14 to 7. In the end, either using RAID-DP or RAID4, the
> capacity is the same.

It's the array's way of saying it isn't really safe to operate a
14-disk RAID-4 group: with only a single parity drive, that many disks
leaves a lot of data exposed in the event of a drive failure. Performance
will also suffer greatly with such a large data:parity drive ratio.
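
Quick back-of-the-envelope math shows why the usable capacity comes out
the same either way. The 14- and 7-disk group limits are the ones you
quoted; the 14-drive total is just an assumption for illustration:

# Back-of-the-envelope usable capacity for the two layouts above,
# assuming 14 identical drives total (the drive count is illustrative).

def data_disks(group_size, parity_per_group, total_disks=14):
    groups = total_disks // group_size
    return groups * (group_size - parity_per_group)

raid_dp = data_disks(group_size=14, parity_per_group=2)  # 1 group  -> 12 data disks
raid4   = data_disks(group_size=7,  parity_per_group=1)  # 2 groups -> 12 data disks
print(raid_dp, raid4)  # 12 12 - same usable capacity either way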

NetApp RAID DP (what you have):
http://www.netapp.com/us/library/white-papers/wp_3298.html

> What's wrong with RAID5, is there any technical limitation with RAID5?

Most onboard RAID controllers are absolute crap, and RAID 5 isn't as
easy to support as RAID 1/RAID 0, since extra processing time and code
are needed for the parity calculations. I wouldn't use them even for
RAID 1, though. If it's onboard and I want RAID, then I'll use software
RAID. The exception is onboard SCSI/SAS RAID controllers, which are
often of decent quality.

RAID 5 as implemented by most disk vendors does have issues at least with
large SATA disks -

Why RAID 5 stops working in 2009 -
http://blogs.zdnet.com/storage/?p=162
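
The core of that article is simple arithmetic: consumer SATA disks are
typically specified at around one unrecoverable read error (URE) per
10^14 bits read, and a RAID-5 rebuild has to read every surviving disk
end to end. A rough sketch, with example drive counts and sizes of my
own (not figures from the article):

# Rough odds of hitting an unrecoverable read error (URE) while reading
# every surviving disk during a RAID-5 rebuild. Drive count and size
# are example numbers only.

URE_PER_BIT = 1e-14        # typical consumer SATA spec: 1 error per 10^14 bits
BITS_PER_TB = 8e12

def p_ure_during_rebuild(surviving_disks, disk_tb):
    bits_read = surviving_disks * disk_tb * BITS_PER_TB
    return 1 - (1 - URE_PER_BIT) ** bits_read   # P(at least one URE)

print(round(p_ure_during_rebuild(surviving_disks=7, disk_tb=1.0), 2))  # ~0.43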

A couple of vendors that I know of - 3PAR (which I use) and Compellent -
don't suffer from this limitation because they virtualize the disks. I
can't speak too much for Compellent's architecture, but 3PAR doesn't
RAID whole physical disks; it RAIDs small portions of each disk. When a
disk fails, every disk in the system participates in rebuilding that
failed set of sub-arrays, which results in upwards of a 10x improvement
in recovery time and a 90%+ drop in system impact while the arrays
are rebuilding. This virtualization also allows you to run
multiple levels of RAID on the same spindles for different purposes.

The likelihood of suffering a double disk failure on such a system during a
RAID rebuild is probably quite a bit less than that of suffering
a triple disk failure on a RAID-6 system that runs RAID on
whole physical disks. At full system load my array can recover from a 750GB
drive failure in less than 3 hours; our previous array took nearly 30 hours
to recover from a 300GB (SATA) drive failure. If
a disk on my array is only half utilized, it will take half the time
to rebuild, as only written data is rebuilt - there's no point in
copying portions of the disk that have never been used, though other
vendors do exactly that because they don't have the architecture to
support sub-disk rebuilds.
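
As a toy model of why many-to-many, sub-disk rebuilds finish so much
faster: the bandwidth and utilization numbers below are illustrative
assumptions of mine, not 3PAR figures, and real arrays throttle rebuild
I/O well below this idealized bound:

# Idealized rebuild-time comparison. All numbers are illustrative assumptions.

def whole_disk_rebuild_secs(disk_gb, spare_write_mbps):
    # Traditional RAID: every block of the failed disk, used or not,
    # is written to a single hot spare.
    return disk_gb * 1024 / spare_write_mbps

def distributed_rebuild_secs(disk_gb, used_fraction, disks, per_disk_mbps):
    # Sub-disk RAID: only written data is rebuilt, and the work is
    # spread across every surviving disk in the system.
    return disk_gb * 1024 * used_fraction / (per_disk_mbps * (disks - 1))

t_whole = whole_disk_rebuild_secs(750, spare_write_mbps=60)
t_dist  = distributed_rebuild_secs(750, used_fraction=0.5, disks=200, per_disk_mbps=10)
print(round(t_whole / 3600, 1), "hours vs", round(t_dist / 60, 1), "minutes (idealized)")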

My own 200-disk array has more than 86,000 individual RAID arrays
on it, most of them RAID 5 (5+1). Array management is
automatic and transparent; I just tell the storage system
how big a volume I want and what type of data protection to use.

For performance reasons I'm migrating my RAID 5 (5+1) to 3+1 next
year after I add more space. I miscalculated the growth of the biggest
consumer of space on the current system, so I had to migrate to
5+1 (RAID level migrations have no service impact on my
array either). On my system there is only about a 9% performance drop
from RAID 1+0 to RAID 5+0, so in many cases running RAID 5 on my
array performs about the same as RAID 1 on other arrays; it's that fast.
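
For context, the capacity side of that trade-off is simple arithmetic:

# Usable-capacity fraction for the layouts discussed above.
layouts = {
    "RAID 1+0":     1 / 2,    # every block mirrored
    "RAID 5 (3+1)": 3 / 4,    # one parity disk per four
    "RAID 5 (5+1)": 5 / 6,    # one parity disk per six
}
for name, frac in layouts.items():
    print(f"{name}: {frac:.0%} usable")   # 50%, 75%, 83%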

RAID 6 takes a significant performance hit versus RAID 5 due to the
extra parity disk, and on some arrays the hit is even
greater because the array calculates parity twice. NetApp, I think, has
as good a RAID 6 implementation as there is, though even they can't get
around writing the parity information to two different disks.
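
Most of that hit comes from the small-write path: with standard
read-modify-write accounting, each extra parity device adds two more
back-end I/Os per host write. This is generic textbook math, not
anything NetApp-specific; arrays that mostly write full stripes avoid
much of it, which is presumably part of why NetApp's RAID 6 holds up as
well as it does:

# Back-end I/Os per small random host write (read-modify-write path).
def small_write_ios(parity_devices):
    # read old data + old parity blocks, then write new data + new parity blocks
    return (1 + parity_devices) * 2

print("RAID 5:", small_write_ios(1), "I/Os")  # 4
print("RAID 6:", small_write_ios(2), "I/Os")  # 6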

nate




