Re: raid10 vs raid5 - strange performance

>  After doing a little research, I see that the original slowest form of PCI
>  was 32 bit 33MHz, with a bandwidth of ~127MB/s.

That's still the prevalent form; for anything else you need an (older)
server or workstation board. It's 133MB/s in theory, 80-100MB/s in
practice.
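
For reference, the theoretical figures are just bus width times clock
rate; a quick sketch (decimal MB/s, using the usual width/clock
pairings - the ~80-100MB/s real-world figure is protocol and
arbitration overhead on top of that):

def pci_bandwidth_mb_s(bus_width_bits, clock_mhz):
    """Theoretical peak PCI transfer rate in MB/s (decimal megabytes)."""
    return bus_width_bits / 8 * clock_mhz

print(pci_bandwidth_mb_s(32, 33.33))   # ~133 MB/s, plain 32-bit/33MHz PCI
print(pci_bandwidth_mb_s(64, 66.66))   # ~533 MB/s, 64-bit/66MHz server slots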

>  The most common hardware used the v2.1 spec, which was 64 bit at 66MHz.

I don't think the spec version has anything to do with speed ratings, really.

>  I would expect operation at UDMA/66

What's UDMA66 got to do with anything?

>  Final thought, these drives are paired on the master slave of the same
>  cable, are they? That will cause them to really perform badly.

The cables are master-only; I'm pretty sure the controller doesn't
even do slaves.


To wrap it up:
- on a regular 32-bit/33MHz PCI bus, md-RAID10 is hurt really badly
because every write has to cross the bus once per mirror copy (rough
numbers below).
- the old 3ware 7506-8 doesn't accelerate RAID-10 in any way, even
though it's a hardware RAID controller, possibly because RAID-10
support is more of an afterthought there.
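
A very rough model of that write penalty, assuming ~90MB/s of usable
PCI bandwidth (an assumed figure, not a measurement):

# Each written block crosses the shared PCI bus once per mirror copy,
# so usable write bandwidth is roughly bus bandwidth / number of copies.
PCI_PRACTICAL_MB_S = 90      # assumed realistic 32-bit/33MHz throughput
COPIES = 2                   # RAID10/RAID1: every block written twice

effective_write_mb_s = PCI_PRACTICAL_MB_S / COPIES
print(f"~{effective_write_mb_s:.0f} MB/s usable for md-RAID10 writes")  # ~45 MB/s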


On the 1+0 vs RAID10 debate ... 1+0 = 10 is usually used to mean a
stripe of mirrors, while 0+1 = 01 is a less optimal mirror of stripes.
The md implementation doesn't really do a stacked RAID, but with the
n2 (near, 2 copies) layout the data distribution should be identical
to 1+0 / 10; see the sketch below.
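
To illustrate what "identical" means, here's a simplified model of
chunk placement on four hypothetical devices (just the layout rule,
not md's actual code):

N_DEVICES = 4
COPIES = 2

def near2_devices(chunk):
    """Devices holding `chunk` under the near-2 layout: copies on adjacent disks."""
    first = (chunk * COPIES) % N_DEVICES
    return {first, (first + 1) % N_DEVICES}

def raid10_stacked_devices(chunk):
    """Devices holding `chunk` in a stripe over mirror pairs (0,1) and (2,3)."""
    pair = chunk % (N_DEVICES // COPIES)
    return {pair * COPIES, pair * COPIES + 1}

for chunk in range(8):
    assert near2_devices(chunk) == raid10_stacked_devices(chunk)
    print(chunk, sorted(near2_devices(chunk)))

With an odd number of devices the near layout still works, it just no
longer maps onto a plain stripe of mirrors.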

C.
