chrism@xxxxxxxxx wrote:
> Joshua Baker-LePain wrote:
>> You know, the whole "disk is cheap, so why use RAID5?" argument just
>> doesn't wash with me. Sure, disk *is* cheap. But some of us need
>> every GB we can get for our money (well, given I'm spending grant
>> money, it's actually *your* money too, if you live in the US).
>>
>> To demonstrate, let's look at a 24-drive system (3ware has a 24-port
>> 9650 board). Newegg has 500GB WD RE2 drives for $160, so for $3840
>> in drives I can get:
>>
>> a) 6TB RAID10 => $0.64/GB
>> or
>> b) 10.5TB RAID6 w/ hot spare => $0.37/GB
>>
>> Umm, I'll take 75% more space for the same money, TYVM.
> Did those prices factor in the drive bay infrastructure for 24 drives:
> cabling, redundant power supplies, and so on?
>
> c) 12TB RAID0 w/ no redundancy => $0.32/GB
>
> When my scratch data increases in importance, I'll have to investigate
> that newfangled RAID6 thang. :) Does RAID6 suffer from this
> performance degradation bogeyman when used with ext3? Isn't RAID6
> just RAID5 with a second, independently computed parity block per
> stripe?
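
For anyone who wants to play with the numbers quoted above, here's a
quick back-of-the-envelope sketch in Python. The drive count, size, and
price are the figures Joshua gave; the usable-capacity formulas are just
the standard ones for each layout, and (per the question above) chassis,
cabling, and power-supply costs are deliberately left out:

    # Back-of-envelope $/GB for the layouts discussed above.
    # Figures from the thread: 24 bays, 500GB WD RE2 drives at $160 each.
    DRIVES = 24
    SIZE_GB = 500
    PRICE = 160.0
    TOTAL_COST = DRIVES * PRICE  # $3840 in drives alone

    def usable_gb(layout):
        if layout == "raid10":       # mirrored pairs: half the raw capacity
            return DRIVES // 2 * SIZE_GB
        if layout == "raid6+spare":  # 23-drive RAID6 (2 parity) + 1 hot spare
            return (DRIVES - 1 - 2) * SIZE_GB
        if layout == "raid0":        # stripe everything, no redundancy
            return DRIVES * SIZE_GB
        raise ValueError(layout)

    for layout in ("raid10", "raid6+spare", "raid0"):
        gb = usable_gb(layout)
        print(f"{layout:12s} {gb / 1000:5.1f} TB  ${TOTAL_COST / gb:.2f}/GB")

That reproduces the $0.64, $0.37, and $0.32 per GB figures from the
thread exactly.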
BTW, I would NOT build a 20-something-drive RAID5/6 set. The rebuild
times would be massively long, opening a large window for a double drive
failure. Before you say "nah, that would never happen", check out
phpbb.com: they lost their web server and forums to a double failure
last month, and yes, they had a hot spare, so the rebuild started
immediately.
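
To put rough numbers on that rebuild window, here's a sketch. It assumes
independent, constant-rate drive failures, a vendor-style 1,000,000-hour
MTBF, and a guessed sustained rebuild rate under load; none of those
figures come from this thread, and the independence assumption is
optimistic (same batch, same chassis, same rebuild stress, as phpbb
found out), so treat the result as a lower bound:

    import math

    # Rough odds of a second drive failing while a RAID5 set rebuilds.
    # Assumptions (not from the thread): independent constant-rate failures,
    # a vendor-style 1,000,000-hour MTBF, a guessed rebuild rate under load.
    MTBF_H = 1_000_000      # per-drive mean time between failures, hours
    SIZE_MB = 500_000       # 500GB drive
    REBUILD_MBPS = 20       # sustained rebuild rate on a busy array (guess)

    def p_second_failure(drives):
        rebuild_h = SIZE_MB / REBUILD_MBPS / 3600
        survivors = drives - 1  # drives that must all hold on during rebuild
        # exponential model: P(at least one survivor fails inside the window)
        return 1 - math.exp(-survivors * rebuild_h / MTBF_H), rebuild_h

    for n in (6, 12, 24):
        p, hours = p_second_failure(n)
        print(f"{n:2d}-drive RAID5: ~{hours:.1f} h rebuild, "
              f"P(2nd failure) ~ {p:.2e}")

The absolute numbers are tiny under this idealized model; the point is
that the exposure scales with both set size and rebuild time, and
correlated real-world failures make it much worse than the model says.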
The large SAN vendors usually don't recommend building RAID5 sets larger
than 6-8 disks, and will stripe or concatenate several of those sets on a
typical SAN with hundreds of spindles. Myself, I'll stick with RAID10 for
anything critical.
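
To make the "small sets, striped together" approach concrete, here's a
sketch comparing one wide 24-drive RAID6 against a stripe of four
6-drive RAID5 sets (RAID50) on the same bays. These particular layouts
are illustrative picks, not any specific vendor's recommendation:

    # Usable capacity and fault tolerance: one wide set vs striped small sets.
    # 24 bays of 500GB drives, as in the example upthread.
    DRIVES, SIZE_GB = 24, 500

    def raid6_wide():
        # single 24-drive RAID6: 2 parity drives, survives any 2 failures
        return (DRIVES - 2) * SIZE_GB, "any 2 drives"

    def raid50():
        # four 6-drive RAID5 sets striped together (RAID50):
        # 1 parity drive per set, survives 1 failure *per set*
        sets, per_set = 4, 6
        return sets * (per_set - 1) * SIZE_GB, "1 drive per 6-disk set"

    for name, fn in (("24-drive RAID6", raid6_wide), ("4x6 RAID50", raid50)):
        gb, tolerance = fn()
        print(f"{name:15s} {gb / 1000:5.1f} TB usable, tolerates {tolerance}")

The small sets give up 1TB here, but a rebuild only has to read one
6-drive set instead of the whole array, so the exposure window per
failure is much shorter.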