Re: Software raid - controller options

You may have missed an earlier discussion about the issues I've been
having with SATA, software RAID and bad drivers. One clear takeaway
from the responses I got is that you really need to run a recent
kernel, as the newer ones may have fixed those problems.

I didn't get any clear responses pointing to specific cards that are
known to behave well when hard drives fail. But if you can live with
a server crashing and needing a manual reboot, then software RAID
is the way to go. I've always been able to get the servers back
online, even with the problematic drivers.
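For what it's worth, getting a box back online after one of those
crashes has just been a matter of reassembling the array by hand. A
rough sketch of what that looks like (the device names /dev/md0 and
/dev/sd[ab]1 are made-up examples, and everything here needs root):

```shell
# Sketch only: bringing a software RAID array back up after an
# unclean shutdown. Device names are examples, not a real config.

# Assemble any arrays listed in /etc/mdadm.conf:
mdadm --assemble --scan

# Or assemble a specific array from its members, forcing assembly
# if the event counters disagree after the crash:
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1

# Check that the array came up and whether it is resyncing:
cat /proc/mdstat
```

Treat --force with care: it is only appropriate when you know the
drives are healthy and the mismatch is just from the crash.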

I am happy with the 3ware cards and use their hardware RAID to
avoid the problems I've described. With those I've fully tested
16-drive systems running 2 arrays across two 8-port cards. Others
have recommended the Areca line.

As for cheap "dumb" interfaces, I am now using the RocketRAID 2220,
which gives you 8 ports on a PCI-X card. I believe the "built-in"
RAID on those is just firmware based, so you may as well present
the drives in normal/legacy mode and run software RAID on top.
Keep in mind I haven't fully tested this setup, nor have I tested
how it behaves when a drive fails.
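To make the idea concrete, here is roughly what building a software
array over drives exposed in plain/legacy (JBOD) mode would look
like. This is a sketch only: the device names, the 8-drive RAID-6
layout, and the config path are example assumptions, not something
I've verified on the 2220, and all of it needs root:

```shell
# Sketch only: md RAID-6 over 8 controller ports presented as
# plain disks. /dev/sd[b-i]1 and the layout are hypothetical.

mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]1

# Record the array so it gets assembled at boot:
mdadm --detail --scan >> /etc/mdadm.conf

# Watch the initial sync progress:
cat /proc/mdstat
```

The point is that the controller is only providing ports; all the
redundancy logic lives in md, so a controller swap later doesn't
strand your data on proprietary metadata.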

Another inexpensive card I've used with good results is the Q-stor
PCI-X card, but I think this is now obsolete.

Hope this helps,

Alberto


On Tue, 2007-11-06 at 05:20 +0300, Lyle Schlueter wrote:
> Hello,
> 
> I just started looking into software RAID with Linux a few weeks ago. I
> am outgrowing the commercial NAS product that I bought a while back.
> I've been learning as much as I can, subscribing to this mailing list,
> reading man pages, and experimenting with loopback devices, setting up
> and expanding test arrays. 
> 
> I have a few questions now that I'm sure someone here will be able to
> enlighten me about.
> First, I want to run a 12-drive RAID 6. Honestly, would I be better off
> going with true hardware RAID like the Areca ARC-1231ML vs software
> raid? I would prefer software raid just for the sheer cost savings. But
> what kind of processing power would it take to match or exceed a mid to
> high-level hardware controller?
> 
> I haven't seen much, if any, discussion of this, but how many drives are
> people putting into software arrays? And how are you going about it?
> Motherboards seem to max out around 6-8 SATA ports. Do you just add SATA
> controllers? Looking around on newegg (and some googling) 2-port SATA
> controllers are pretty easy to find, but once you get to 4 ports the
> cards all seem to include some sort of built in *raid* functionality.
> Are there any 4+ port PCI-e SATA controller cards? 
> 
> Are there any specific chipsets/brands of motherboards or controller
> cards that you software raid veterans prefer?
> 
> Thank you for your time and any info you are able to give me!
> 
> Lyle
> 
> -
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
-- 
Alberto Alonso                        Global Gate Systems LLC.
(512) 351-7233                        http://www.ggsys.net
Hardware, consulting, sysadmin, monitoring and remote backups

