Re: Raid Card controller for FC System

Todd Denniston wrote:
Roger Heflin wrote, On 03/24/2008 02:20 PM:
edwardspl@xxxxxxxxxx wrote:
Alan Cox wrote:

On Sun, 23 Mar 2008 15:39:22 +0800
edwardspl@xxxxxxxxxx wrote:

Dear All,

Which model / type of ATA RAID controller card is good to work with a new FC system?
Would you please recommend one?
Almost every 'raid' controller for ATA devices is just driver-level raid,
so it is equivalent to using the built-in lvm/md raid support that works with
any device. At the high end there are a few hardware raid cards, but they
rarely outperform ordinary ATA on PCI Express.
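For the curious, a minimal sketch of what the built-in md route looks like in practice, assuming four scratch disks at /dev/sdb through /dev/sde (placeholder names; this destroys their contents and needs root):

    #!/usr/bin/env python3
    # Minimal sketch only: create a 4-disk md RAID5 array with mdadm.
    # The device names below are placeholders -- this DESTROYS their
    # contents, so run it as root only on disks you can scratch.
    import subprocess

    disks = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # assumption
    subprocess.run(["mdadm", "--create", "/dev/md0", "--level=5",
                    f"--raid-devices={len(disks)}", *disks], check=True)
    # The array is usable immediately; watch the initial resync here:
    subprocess.run(["cat", "/proc/mdstat"], check=True)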



Edward,

The cheapest 4-port raid cards are typically $300 US, and the 8-port cards are quite a bit more. If you are a home user I would suggest not wasting your money on HW raid; as others have mentioned, it is not really worth the extra money for a home user, so use software raid.

Most of the cheaper cards are fakeraid and at best (if supported under dmraid) are only slightly better than software raid.

                             Roger


So would the better question be:
Which model / type of multi-port ATA controller card is good when you want to do software RAID with a new Fedora system? i.e., which manufacturers make cards that you can hang 4+ drives off of, with enough independence[1] between drives that software RAID works fast[2]?
Can you get 4+ port SATA cards that don't claim to be "RAID" cards?

Yes. The problem is that if you have plain PCI (not -E or -X), the bus is limited to about 130 MB/second shared across all cards, so for a fast machine you need either -E or -X, and multi-port cards for those get expensive too.
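To put rough numbers on the bus limits (standard spec figures, shown as arithmetic, not measurements):

    # Where "about 130 MB/second" comes from: classic PCI is 32 bits wide
    # at 33 MHz, and every card on the bus shares that ceiling.
    pci     = 33_000_000 * 4      # 32-bit/33 MHz PCI: ~132 MB/s, shared
    pci_x   = 133_000_000 * 8     # 64-bit/133 MHz PCI-X: ~1064 MB/s
    pcie_x1 = 250_000_000         # PCIe 1.x: ~250 MB/s per lane, per direction
    for name, rate in [("PCI", pci), ("PCI-X 133", pci_x), ("PCIe x1", pcie_x1)]:
        print(f"{name:10s} ~{rate / 1e6:.0f} MB/s")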

I have a crappy (but cheap) 4-port SATA SiL card. It works, it is PCI and not the fastest, but it is cheap and appears to be reliable; it only does about 60 MB/second writes and 90 MB/second reads with 4 disks under RAID5.
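Those numbers are roughly what the shared bus allows; the 3/4 factor below is just the data-to-parity ratio for a 4-disk RAID5, so this is illustrative arithmetic only:

    # Sanity check: with 4 disks, every 3 blocks of data cross the shared
    # PCI bus together with 1 parity block, so user write throughput tops
    # out near 3/4 of the bus limit -- before seeks and read-modify-write
    # overhead pull it lower.
    bus_limit = 133e6                          # shared PCI ceiling, bytes/s
    disks = 4
    best_case_write = bus_limit * (disks - 1) / disks
    print(f"ideal RAID5 write on plain PCI: ~{best_case_write / 1e6:.0f} MB/s")
    # ~100 MB/s ideal, so a measured 60 MB/s is in the right ballpark.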

If you need speed and more than 4 ports, the cost goes up quite a bit.

And you have to test them. I have seen cards where each pair of SATA ports shares hardware, so using 2 disks on ports 1 and 3 is faster than using 2 disks on ports 1 and 2, and of course all of the ports share the bus the card is plugged into. It is critical to test things, as none of the specifications will actually tell you any of this; a rough test sketch follows.
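One way to do that comparison, assuming placeholder device names and a root shell (read-only, so non-destructive):

    #!/usr/bin/env python3
    # Rough port-independence test: read from two disks at once and compare
    # the combined rate for ports 1+2 vs ports 1+3.  Device names are
    # placeholders; run as root, and drop the page cache between runs
    # ("echo 3 > /proc/sys/vm/drop_caches") so repeats are not served
    # from RAM.
    import threading, time

    DEVICES = ["/dev/sdb", "/dev/sdc"]     # swap in the pair under test
    CHUNK, TOTAL = 1 << 20, 256 << 20      # 1 MB reads, 256 MB per disk

    def read_disk(path, results):
        start, done = time.time(), 0
        with open(path, "rb", buffering=0) as f:
            while done < TOTAL:
                data = f.read(CHUNK)
                if not data:
                    break
                done += len(data)
        results[path] = done / (time.time() - start) / 1e6

    results = {}
    threads = [threading.Thread(target=read_disk, args=(d, results)) for d in DEVICES]
    for t in threads: t.start()
    for t in threads: t.join()
    for dev, rate in sorted(results.items()):
        print(f"{dev}: {rate:.0f} MB/s")
    print(f"combined: {sum(results.values()):.0f} MB/s")

Run it once with disks on ports 1 and 2 and again with disks on ports 1 and 3; if the combined rate differs noticeably, the ports share hardware.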

A lot of the Highpoint fakeraid cards are based on the Marvell SATA chipset(s), perform pretty well, and are reasonably cheap, but you have to actually confirm that the given version is truly supported under Linux without their extra driver. I have not tested the recent multiport Adaptec controllers, so I don't know what they will do.

On motherboards you have a similar set of issues: where a given motherboard attaches its SATA controllers determines how high the total throughput can be. Some put the SATA controllers on the plain PCI (non-X, non-E) part of the chipset, which makes them truly suck, and some put them closer to the CPU on higher-bandwidth parts of the motherboard/chipset. You need to look at the motherboard's block diagram to figure out exactly what is shared with what, to determine the best configuration for the most speed, and to see what other components on the board can affect you (network, ...).


Or has everything already been said here:
http://linuxmafia.com/faq/Hardware/sata.html
http://linux-ata.org/faq-sata-raid.html

[1] I am making the old assumption that ATA drives on the same bus slow each other down. Does that really matter with SATA?

That depends on the SATA card's chipset; generally with the newer ones it does not matter.


[2] assuming the controller card is more likely to be the bottleneck than the processor, PCI bus, or drives.


Unless you have a huge number of disks and good bandwidth everywhere, the answer is that one of the busses to the SATA card will probably be the bottleneck. I have had 3+ year old quad-socket motherboards with PCI-X sustain 360 MB/second reads or writes; the bottleneck in that case was the two 2 Gbps fiber channel ports being used (about a 440 MB/second limit). In that case I had several external raid units attached to the machines, each of which was quite fast, and had I used more ports on separate PCI-X buses I could probably have exceeded that rate easily, but the spec we were trying to meet was met by the 360 MB/second, so there was no need to go to more extreme measures.
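For reference, the roughly 440 MB/second figure falls out of the fiber channel line rate (2.125 Gbaud with 8b/10b encoding); rough arithmetic only:

    # 2 Gb fiber channel signals at 2.125 Gbaud with 8b/10b encoding, so
    # each port carries roughly 2.125e9 * 8/10 bits of payload per second.
    per_port = 2.125e9 * 0.8 / 8           # ~212 MB/s payload per 2Gb port
    print(f"two 2Gb FC ports: ~{2 * per_port / 1e6:.0f} MB/s")   # ~425 MB/s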

It comes down to this: you need to know how a given motherboard allocates its bandwidth, and what the limits are where things join together, to even be able to guess (before testing) what the limit is going to be; and you need to know what you actually need for a given application.

                                  Roger

--
fedora-list mailing list
fedora-list@xxxxxxxxxx
To unsubscribe: https://www.redhat.com/mailman/listinfo/fedora-list