Serial ATA hardware RAID.




From: Pasi Pirhonen
> This is as good insertion point as anything else, for what i am going
> to say.
> 3ware is working like a champ, but slowly.
> The tuning won't make it magically go over 100MB/s sustained writes.

I am bumping up against 100MB/s on my bonnie write benchmarks on an older 6410.
Now that's in RAID-0+1, and not RAID-5.

> Random I/O sucks (what i've seen) for any SATA-setup.

It depends.
"Raw" ATA sucks for multiple operations because it has no I/O queuing.
AHCI is trying to address that, but it's still unintelligent.
3Ware queues reads/writes very well, and sequences them as best as it can.
But it's still not perfect.
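To make the queuing point concrete, here's a toy sketch (Python; the request list, the numbers, and the helper name are mine, purely illustrative - this is not 3ware's firmware or the kernel elevator). With no queue the drive services requests in arrival order; with a queue it can sort them by LBA and sweep the head once.

    def seek_distance(order, start=0):
        # Total head travel (in LBAs) to service the requests in this order.
        pos, total = start, 0
        for lba in order:
            total += abs(lba - pos)
            pos = lba
        return total

    requests = [700000, 10000, 650000, 20000, 660000, 15000]

    print("FIFO travel  :", seek_distance(requests))          # no queue: arrival order
    print("sorted travel:", seek_distance(sorted(requests)))  # queued: one elevator sweep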

> Even for /dev/mdX.

Now with MD, you're starting to tax your interconnect on writes.
E.g., with microcontroller or ASIC RAID, you only push the data you write.
With software (including "FRAID"), you push 2x for RAID-1.
That's 2x through your memory, over the system interconnect into the I/O subsystem, and out the PCI bus.
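A quick back-of-the-envelope version of that doubling (the write rate here is an assumed number, just to show the arithmetic):

    app_write_rate = 50                 # MB/s the application writes (assumed)

    hw_raid1_bus = app_write_rate * 1   # card duplicates the block on-board
    sw_raid1_bus = app_write_rate * 2   # host pushes one copy per mirror member

    print("hardware RAID-1 over PCI:", hw_raid1_bus, "MB/s")
    print("software RAID-1 over PCI:", sw_raid1_bus, "MB/s")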

When you talk RAID-3/4/5 writes, you slaughter the interconnect.
The bottleneck isn't the CPU.
It's the fact that for each stripe, you've gotta load the data from memory through the CPU and back to memory - all over the system interconnect - before even looking at I/O.
For 4+ modern ATA disks, you're talking a round trip that costs you upwards of 30% of your aggregate system interconnect time.
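For a concrete feel of the write amplification, here's a sketch of the classic RAID-5 read-modify-write path for a sub-stripe write in software (the chunk size and request rate are assumptions, not measurements):

    # Small RAID-5 write, read-modify-write path:
    #   read old data + read old parity                 -> 2 blocks in over the bus
    #   new parity = old data ^ old parity ^ new data   (memory/CPU traffic)
    #   write new data + write new parity               -> 2 blocks out over the bus

    block_kb = 64                # assumed chunk size
    writes_per_sec = 1000        # assumed small random writes per second

    app_mb = writes_per_sec * 1 * block_kb / 1024.0   # what the application wrote
    bus_mb = writes_per_sec * 4 * block_kb / 1024.0   # what crosses the interconnect

    print("application writes:", app_mb, "MB/s")
    print("bus/interconnect  :", bus_mb, "MB/s (4x amplification)")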

On a dynamic web server or other computationally intensive server, it matters little.
The XOR operations actually use very little CPU power.
And the web or computational streams aren't saturating the interconnect.
But when you are doing file server I/O, and the system interconnect is used for raw bursts of network I/O as much as storage, it kills.
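The parity math itself really is trivial - here's a toy Python version of it (my own helper, not the kernel's; md uses tuned XOR routines, but the point is the data volume, not the arithmetic):

    import functools, operator

    def xor_parity(blocks):
        # RAID-4/5 parity: byte-wise XOR of all the data chunks in a stripe.
        return bytes(functools.reduce(operator.xor, col) for col in zip(*blocks))

    stripe = [bytes([i]) * 64 * 1024 for i in (1, 2, 3, 4)]   # 4 x 64KB chunks
    parity = xor_parity(stripe)

    # Lose any one chunk, and XOR of the survivors plus parity rebuilds it.
    assert xor_parity(stripe[1:] + [parity]) == stripe[0]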

> Puny oldish A1000 can beat those with almoust factor of ten for random
> I/O, but being limited to max. 40MB/s transfers by it's interface
> (UW/HVD).

Or more like the i960 because, after all, RAID should stripe some operations across multiple channels.

> But what i am going to say is that for my centos devel work
> (as in my NFS-server), i just recently moved my 1.6TB raid under /dev/md5 with
> HighPoint RocketRaid1820.
> I don't care that NOT being hardware RAID.
> The /dev/mdX beats the 3ware 9500S-8 formerly used hands down when you
> do have 'spare CPU cycles to let kernel handle the parity operations'.

It has nothing to do with CPU cycles; it's the interconnect.
XOR puts no strain on modern CPUs; it's the added data streams being fed from memory to the CPU.
Furthermore, using async I/O, MD can actually be _faster_ than hardware RAID.
Volume management in an OS will typically do much better than a hardware RAID card when it comes to block writes.
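As a rough illustration of why async I/O matters (idealized arithmetic with assumed numbers, not a benchmark of md): with several requests in flight, per-disk service times overlap instead of adding up.

    service_ms  = 8         # assumed average service time per request
    requests    = 64
    queue_depth = 4         # assumed number of requests kept in flight

    serial_ms     = requests * service_ms                 # synchronous, one at a time
    overlapped_ms = requests * service_ms / queue_depth   # idealized overlap

    print("synchronous, one at a time:", serial_ms, "ms")
    print("async, queue depth", queue_depth, ":", overlapped_ms, "ms")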

Of course, the 9500S is still maturing.
Which is why I still prefer to use 4- and 8-channel 7506/8506 cards with RAID-0+1.
Even the AccelATA and 5000 left much to be desired before the 6000 and later 7000/8000 series.

