This is a 2-disk Linux software RAID1 built from two 7200 RPM drives, one
PATA and one SATA:
apollo13 ~ # hdparm -t /dev/md0
/dev/md0:
Timing buffered disk reads: 156 MB in 3.02 seconds = 51.58 MB/sec
apollo13 ~ # hdparm -t /dev/md0
/dev/md0:
Timing buffered disk reads: 168 MB in 3.06 seconds = 54.87 MB/sec
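That ~52 MB/s is roughly single-disk speed, which fits: md RAID1 serves a
single sequential stream from one mirror rather than striping it across
both. For comparison you could time the members directly (the device names
below are assumptions, substitute whatever the array actually uses):

hdparm -t /dev/hda   # PATA member (hypothetical device name)
hdparm -t /dev/sda   # SATA member (hypothetical device name)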
This is a 5-disk Linux software RAID5 built from four 7200 RPM drives and
one 5400 RPM drive (three SATA, two PATA):
apollo13 ~ # hdparm -t /dev/md2
/dev/md2:
Timing buffered disk reads: 348 MB in 3.17 seconds = 109.66 MB/sec
apollo13 ~ # hdparm -t /dev/md2
/dev/md2:
Timing buffered disk reads: 424 MB in 3.00 seconds = 141.21 MB/sec
apollo13 ~ # hdparm -t /dev/md2
/dev/md2:
Timing buffered disk reads: 426 MB in 3.00 seconds = 141.88 MB/sec
apollo13 ~ # hdparm -t /dev/md2
/dev/md2:
Timing buffered disk reads: 426 MB in 3.01 seconds = 141.64 MB/sec
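Individual hdparm runs bounce around a bit, so it is worth averaging a few.
A minimal sketch (not from the original post) that repeats the test and
lets awk average the MB/sec figures:

for i in 1 2 3 4; do hdparm -t /dev/md2; done \
  | awk '/MB\/sec/ { sum += $(NF-1); n++ } END { printf "%.2f MB/sec average\n", sum/n }'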
The machine is a desktop Athlon 64 3000+ with the buggy nForce3 chipset and
1 GB of DDR400, running Gentoo Linux 2.6.15-ck4 in 64-bit mode.
The bottleneck is the PCI bus: plain 32-bit/33 MHz PCI is good for about
133 MB/s in theory (less in practice), so controllers sharing it cap what
the array can stream.
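To check which controllers actually hang off that shared bus (as opposed
to the chipset's integrated ports), lspci from pciutils shows the layout:

lspci -tv   # tree view of the bus topology with device names
lspci -vv   # per-device details (look at the storage controllers)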
Expensive SCSI hardware RAID cards with expensive 10K RPM hard disks should
not get humiliated by such a simple (and cheap) setup. (I'm referring to
the 12-drive RAID10 mentioned before, not the other one, which was a simple
2-disk mirror.) Tom's Hardware benchmarked some hardware RAIDs and got
humongous transfer rates... hm?