Re: 4 X 500 gb drives - best software raid config for a backup server?




On Sat, 2009-02-21 at 08:40 +0800, Chan Chung Hang Christopher wrote:
> Ian Forde wrote:
> > I'd have to say no on the processing power for RAID 5.  Moore's law has
> > grown CPU capabilities over the last 15 or so years.  HW RAID
> > controllers haven't gotten that much faster because they haven't needed
> > to.  It's faster to do it in software, though it's preferable to offload
> > it to HW RAID so that applications aren't directly affected.
> >   
> You will have to prove that. I have previously posted links to 
> benchmarks that show that hardware raid with sufficient processing 
> power beats the pants off software raid when it comes to raid5/6 
> implementations. Hardware raid cards no longer come with crappy i960 cpus.

Just by doing some quick googling, I came across:

http://blogs.zdnet.com/storage/?p=126
http://storagemojo.com/2007/04/24/mo-better-zfs-performance-stats/
http://milek.blogspot.com/2007/04/hw-raid-vs-zfs-software-raid-part-iii.html

Now, bear in mind that I'm no ZFS fanboy, but I'm saying that it's not
so cut and dried anymore.  The equation changes, of course, when we're
talking about a dedicated fileserver versus an application server that
needs RAID.  (The app server can suffer because it's losing access to CPU
resources.)  But the point of contention is still there.  Both are
viable solutions.  Considering that SW RAID was never a serious
contender for performance over the years, look at where it is now.  That
tells me it's trending up towards equaling or bettering HW RAID
performance.  And that's before even talking about price points.  When
you throw that in...

But again - I still like HW RAID.  I think we're in agreement on this.

> > I would agree that cache memory is an advantage, especially when
> > considering battery-backed cache memory.  
> There is more to it. That cache memory also cuts down on bus traffic, but 
> the real kicker is that there is no bus contention between the board's 
> cpu and disk data, whereas software raid needs to read off the disks for 
> its calculations and therefore suffers latencies that hardware raid 
> boards (which have direct connections to the disks) do not. Of course, if 
> the cache size is insufficient, then the hardware raid board will not 
> perform much better than software raid, if not worse.

Indeed.
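
To make the bus-contention point concrete: RAID5 parity is nothing more
than an XOR across the data blocks in a stripe, so a software RAID layer
has to haul every one of those blocks over the shared bus to the host
CPU before it can write the parity.  A toy illustration (plain Python,
not how md actually lays anything out):

    # RAID5 parity is the XOR of the data blocks in a stripe.  Software
    # RAID computes this on the host CPU, so every data block crosses
    # the system bus; a HW RAID board does the same XOR on the card.
    def raid5_parity(blocks):
        """XOR a list of equal-sized byte blocks into one parity block."""
        parity = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(bytearray(block)):
                parity[i] ^= b
        return bytes(parity)

    # Any single lost block can be rebuilt by XOR-ing the survivors:
    stripe = [b"\x01\x02", b"\x0f\x0f", b"\xff\x00"]
    p = raid5_parity(stripe)
    assert raid5_parity([stripe[0], stripe[2], p]) == stripe[1]

The XOR itself is trivial for a modern CPU; it's the data movement that
the point above is about.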

> > But those aren't the only significant areas.  HW RAID allows for
> > hot-swap and pain-free (meaning zero commands needed) disk replacement.
> >   
> 
> Hmm...really? I guess it depends on the board. (okay, okay, thinking of 
> the antique 3ware 750x series may not be fair)

I was thinking about when I was running a farm of 500 HP DL-x80 series
boxes and disk replacement became a 9x5 job that we farmed out.  We'd
just hand over a list of servers and drive locations (first drive or
second drive), and the person could pull the old drives out, put the new
drives in, and the resync was automatic.  The same is true for Dell PERC
hardware.  I'll note that that's not necessarily true of ALL HW RAID
controllers, as they have to support hot-swap, and the chassis has to
have hot-swap slots.  But still, I've only seen one SW RAID
implementation that does auto-sync.  That's the Infrant ReadyNAS
(http://www.readynas.com).  I wonder how they did it?  Might not be a
bad idea to see how they're able to use mdadm to detect and autosync
drives.  I don't *ever* want to go through something like:

http://kev.coolcavemen.com/2008/07/heroic-journey-to-raid-5-data-recovery/

Not when a little planning can help me skip it... ;)
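
For the record, my guess on the ReadyNAS trick is nothing fancier than
watching for a degraded array and re-adding whatever fresh disk shows
up.  Something along these lines, purely a hypothetical sketch (the
array and device names are made up, and a real appliance would hook
udev events rather than poll):

    #!/usr/bin/env python
    # Hypothetical sketch, NOT the ReadyNAS code: poll /proc/mdstat for
    # a degraded array and re-add an assumed replacement partition.
    import re, subprocess, time

    ARRAY = "/dev/md0"            # assumed array name
    REPLACEMENT = "/dev/sdb1"     # assumed partition on the new disk

    def degraded():
        """True if /proc/mdstat shows a missing member, e.g. [U_]."""
        with open("/proc/mdstat") as f:
            return bool(re.search(r"\[U*_+U*\]", f.read()))

    while True:
        if degraded():
            # md starts the resync on its own once the member is added.
            subprocess.call(["mdadm", "--manage", ARRAY, "--add", REPLACEMENT])
        time.sleep(30)

In real life you'd also have to partition the new disk to match the old
one first (something like sfdisk -d on the survivor piped into sfdisk on
the new drive), which is exactly the step the HW RAID boards spare you.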

	-I

_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos
