>Unless I'm missing something, the only FC or SCSI HDs of ~147GB capacity are 15K, not 10K.

In the spec we got from HP, they are listed as model 286716-B22 (http://www.dealtime.com/xPF-Compaq_HP_146_8_GB_286716_B22), which seems to run at 10K. Don't know how old those are, but that's what we got from HP anyway.

>15Krpm HDs will have average access times of 5-6ms. 10Krpm ones of 7-8ms.

The average seek time for that disk is listed as 4.9ms, which maybe sounds a bit optimistic? (Though adding the ~3ms average rotational latency of a 10K drive, that works out to roughly the 7-8ms access time you mention.)

>28HDs as above setup as 2 RAID 10's => ~75MB/s*5= ~375MB/s, ~75*9= ~675MB/s.

I guess it's still limited by the 2Gbit FC (~200MB/s per link after encoding overhead), right? That's well below either array's theoretical aggregate.

>Very, very few RAID controllers can do >= 1GBps. One thing that helps greatly with bursty IO patterns is to up your battery backed RAID cache as high as you possibly can. Even multiple GBs of BBC can be worth it. Another reason to have multiple controllers ;-)

I use 90% of the RAID cache for writes; I don't think I could go higher than that. Too bad the Emulex only has 256MB though :/

>Then there is the question of the BW of the bus that the controller is plugged into.
>~800MB/s is the RW max to be gotten from a 64b 133MHz PCI-X channel.
>PCI-E channels are usually good for 1/10 their rated speed in bps as Bps.
>So a PCI-Ex4 10Gbps bus can be counted on for 1GBps, PCI-Ex8 for 2GBps, etc.
>At present I know of no RAID controllers that can singly saturate a PCI-Ex4 or greater bus.

The controller is an FC2143 (http://h71016.www7.hp.com/dstore/MiddleFrame.asp?page=config&ProductLineId=450&FamilyId=1449&BaseId=17621&oi=E9CED&BEID=19701&SBLID=), which uses PCI-E. Don't know how it compares to other controllers; I haven't had the time to search for or read any reviews yet.

>>Now to the interesting part: would it make sense to use different
>>stripe sizes on the separate disk arrays?
>>
>The short answer is Yes.

Ok

>WAL's are basically appends that are written in bursts of your chosen log chunk size and that are almost never read afterwards. Big DB pages and big RAID stripes make sense for WALs.

According to http://www.pcguide.com/ref/hdd/perf/raid/concepts/perfStripe-c.html, it seems to be the other way around? ("As stripe size is decreased, files are broken into smaller and smaller pieces. This increases the number of drives that an average file will use to hold all the blocks containing the data of that file, theoretically increasing transfer performance, but decreasing positioning performance.") I guess I'll have to find out which theory holds by good ol' trial and error... :) A rough sketch of the kind of test I mean is below.
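Something along these lines: time synchronous WAL-sized appends against a file on each array, once per candidate stripe size, and compare the MB/s figures. The path and sizes are just placeholders for my own setup, nothing official:

#!/usr/bin/env python
# Rough benchmark: sequential synchronous appends at a WAL-like chunk
# size. Point PATH at the array under test after setting the stripe
# size you want to try, then compare the printed MB/s across runs.
import os, time

PATH = "/mnt/array/testfile"      # placeholder: file on the array under test
CHUNK = 8 * 1024                  # 8KB chunks, like PostgreSQL WAL blocks
TOTAL = 256 * 1024 * 1024         # 256MB written per run

def append_test(path, chunk, total):
    buf = b"\0" * chunk
    # O_SYNC forces every write to storage (or the controller's BBC),
    # which is roughly what the WAL does on commit.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC)
    start = time.time()
    for _ in range(total // chunk):
        os.write(fd, buf)
    elapsed = time.time() - start
    os.close(fd)
    os.unlink(path)
    return total / elapsed / (1024 * 1024)

print("%.1f MB/s" % append_test(PATH, CHUNK, TOTAL))

Not a substitute for running the real workload, of course, but it should at least show whether big or small stripes win for pure WAL-style appends on each array.

- Mikael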