Re: Has anyone compared SWRAID for (JBOD@HW-RAID) vs. (Regular Sata PCI-e cards)?

Justin Piszcz wrote:
.. What I meant was is JBOD using a single card with 16 ports faster than
using JBOD with multiple PCI-e SATA cards?

JBOD on a HW RAID card really wastes its primary purposes: offloading RAID processing from the CPU, and consolidating large transactions.

Using HW RAID-1 means that, for example, _one_ copy of a 4k write to a RAID-1 device goes to the card, which performs the data replication to each member device. In SW RAID's case, $N copies cross the PCI bus, one for each device in the RAID-1 mirror.
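
As a minimal sketch (not the actual md code; device setup and error handling are left out), the SW RAID-1 write path looks roughly like this: the same buffer is submitted once per mirror leg, so $N full copies of the data cross the bus:

	#include <unistd.h>
	#include <sys/types.h>

	struct mirror_leg { int fd; };	/* one member device */

	/* Write the same buffer to every mirror leg; each pwrite()
	 * pushes another full copy of the data across the bus to
	 * that leg's controller. */
	static int raid1_write(struct mirror_leg *legs, int nlegs,
			       const void *buf, size_t len, off_t off)
	{
		int i;

		for (i = 0; i < nlegs; i++)
			if (pwrite(legs[i].fd, buf, len, off) != (ssize_t)len)
				return -1;
		return 0;
	}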

In HW RAID-5, one 4k write can go to the card, which then performs the parity calculation and data replication. In SW RAID-5, the parity calculation occurs on the host CPU, and $N copies go across the PCI bus.
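
To make the CPU-side cost concrete, here is a toy illustration (not the kernel's optimized XOR routines) of the RAID-5 parity calculation: P is just the byte-wise XOR of the data chunks in a stripe. SW RAID runs this loop on the host CPU; HW RAID runs it on the card's processor or XOR engine:

	#include <stddef.h>
	#include <stdint.h>

	/* parity[b] = data[0][b] ^ data[1][b] ^ ... ^ data[ndata-1][b] */
	static void raid5_parity(uint8_t *parity, uint8_t *const data[],
				 int ndata, size_t chunk_bytes)
	{
		size_t b;
		int d;

		for (b = 0; b < chunk_bytes; b++) {
			uint8_t p = 0;

			for (d = 0; d < ndata; d++)
				p ^= data[d][b];
			parity[b] = p;
		}
	}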

Running a HW RAID card in JBOD mode eliminates all the HW RAID efficiencies just listed. You might as well run plain SATA controllers at that point, since you gain additional flexibility and better performance.

But unless you are maxing out PCI bus bandwidth -- highly unlikely for PCI Express unless you have 16 SSDs or so -- you likely won't even notice SW RAID's additional PCI bus use.
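
A back-of-envelope example, assuming a PCIe 1.x x8 link (about 2 GB/s per direction, 8 lanes at ~250 MB/s each) and ballpark sustained transfer rates:

	16 hard drives x ~100 MB/s = ~1.6 GB/s   (fits within the link)
	16 SSDs        x ~250 MB/s = ~4.0 GB/s   (can saturate the link)

So with ordinary spinning disks, the extra copies SW RAID sends across the bus are well within the available bandwidth.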


And of course, there are plenty of other factors to consider. I wrote a bit on this topic at http://linux.yyz.us/why-software-raid.html

	Jeff


