Linux Software RAID Performance

Hello everyone,
 
I recently posted to this mailing list when I was having trouble getting my HPT370A add-on controller to work with 4 Maxtor hard drives. I never did get it to work in the HPT's "hardware" RAID mode (4x80GB Maxtors in RAID 0; it would work with 2 disks but not with 4), but Linux software RAID works just fine (the /dev/md stuff). I also have another RAID 0 configuration on the same machine: a 2x80GB array attached to the motherboard's built-in controller (hdc and hdd). Ever since, the system has been rock-stable (40 days of uptime until a power outage) and it also performs quite well.
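
(For context, a two-disk RAID 0 array like md0 can be created roughly as follows; this is only a sketch using mdadm with a 64k chunk, not necessarily the exact commands I used:)

  # create a 2-disk RAID 0 array with a 64k chunk size (sketch only)
  mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/hdc /dev/hdd
  # confirm the array came up
  cat /proc/mdstat
  # put an ext3 filesystem on it
  mke2fs -j /dev/md0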
 
I recently ran some benchmarks just to get a ballpark idea of how fast it really is (it's a server with 100 Mbps Ethernet, and when I download over NFS I get 11.5 MB/sec from both arrays).
The add-on card array is not a problem; it is quite fast (hdparm -t /dev/hd[efgh] gives about 35 MB/sec for each individual disk, and the array gets about 100 MB/sec, which I attribute to the PCI bus bandwidth limit). However, when I recently tested the other two disks, I got the following (md0 = hdc + hdd in RAID 0):
/dev/hdc:
 Timing buffered disk reads:  64 MB in  1.98 seconds = 32.32 MB/sec
/dev/hdd:
 Timing buffered disk reads:  64 MB in  1.97 seconds = 32.49 MB/sec
/dev/md0:
 Timing buffered disk reads:  64 MB in  2.47 seconds = 25.91 MB/sec
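
(Those figures are from hdparm -t; to reproduce them, and to cross-check with a larger sequential read, I run roughly the following; the dd size is arbitrary:)

  # buffered read timing for each disk and for the whole array
  hdparm -t /dev/hdc /dev/hdd /dev/md0
  # rough cross-check: read 512 MB sequentially from the array
  dd if=/dev/md0 of=/dev/null bs=1M count=512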
 
Any ideas why the array is evidently underperforming? I have already thought of conflicts from both drives sharing the same IDE channel, and of a stripe size that is not optimized for the filesystem (64k chunks on ext3). Would either of those qualify? If not, what else could be the possible source of the problem?
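
(In case it helps, this is a rough sketch of how I would double-check the chunk size and, if the filesystem were ever recreated, match the ext3 stride to it; the stride value below is just 64k chunk / 4k block, and I have not actually tried this on the array:)

  # the chunk size shows up in /proc/mdstat, e.g. "64k chunks"
  cat /proc/mdstat
  # if recreating the filesystem, match the ext3 stride to the chunk:
  # stride = chunk size / block size = 64k / 4k = 16
  mke2fs -j -b 4096 -E stride=16 /dev/md0   # older mke2fs versions use -R stride=16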
 
Thanks for any help and suggestions.
 
MK
 
