Your numbers seem quite OK considering the number of disks. We also get a
256MB battery-backed cache module with it, so I'm looking forward to testing
the write performance (first using ext3, then xfs). If I get enough time to
test it, I'll test both RAID 0+1 and RAID 5 configurations, although I trust
RAID 0+1 more.

And no, it's not the cheapest way to get storage - but it's only half as
expensive as the other option: an EVA4000, which we're gonna have to go for
if we (they) decide to stay in bed with a proprietary database. With
Postgres we don't need replication on the SAN level (we'd use Slony), so the
MSA 1500 would be sufficient, and that's a good thing (price-wise) as we're
gonna need two. OTOH, the EVA4000 will not give us mirroring, so either way
we're gonna need two of whatever system we go for. Just hoping the MSA 1500
is reliable as well...

Support will hopefully not be a problem for us, as we have a local company
providing it; they're also the ones setting the system up for us, so at
least we'll know right away whether they're competent or not :)

Regards,
Mikael

-----Original Message-----
From: Alex Hayward [mailto:xelah@xxxxxxxxxxxxxxxxxxxxxxxx] On Behalf Of Alex Hayward
Sent: 21 April 2006 17:25
To: Mikael Carneholm
Cc: Pgsql performance
Subject: Re: [PERFORM] Hardware: HP StorageWorks MSA 1500

On Thu, 20 Apr 2006, Mikael Carneholm wrote:

> We're going to get one for evaluation next week (equipped with dual
> 2Gbit HBAs and 2x14 disks, IIRC). Anyone with experience of them,
> performance-wise?

We (Seatbooker) use one. It works well enough. Here's a sample bonnie
output:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
        16384 41464 30.6 41393 10.0 16287  3.7 92433 83.2 119608 18.3 674.0 0.8

which is hardly bad (on a four-disk 15kRPM RAID 10 with 2Gbps FC).
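For anyone decoding those columns, here's a small sketch (the column layout
is assumed from classic bonnie 1.x output, so treat it as illustrative)
that pulls the block-write/read figures out of the data line quoted above
and converts them to MB/s:

```python
# Parse the classic bonnie data line quoted above (assumed bonnie 1.x
# column order: size, per-char out, %CPU, block out, %CPU, rewrite, %CPU,
# per-char in, %CPU, block in, %CPU, seeks/sec, %CPU).
line = "16384 41464 30.6 41393 10.0 16287 3.7 92433 83.2 119608 18.3 674.0 0.8"
f = [float(x) for x in line.split()]

blk_write_k = f[3]   # sequential block output, K/sec
blk_read_k = f[9]    # sequential block input, K/sec
seeks = f[11]        # random seeks per second

print(f"block write: {blk_write_k / 1024:.1f} MB/s")  # 40.4 MB/s
print(f"block read:  {blk_read_k / 1024:.1f} MB/s")   # 116.8 MB/s
print(f"random seeks: {seeks:.0f}/s")                 # 674/s
```

So sequential writes land around 40 MB/s and sequential reads around
117 MB/s on that four-disk RAID 10.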
Sequential scans on a table produce about 40MB/s of IO with the 'disk'
something like 60-70% busy according to FreeBSD's systat.

Here's diskinfo -cvt output on a not quite idle system:

/dev/da1
        512             # sectorsize
        59054899200     # mediasize in bytes (55G)
        115341600       # mediasize in sectors
        7179            # Cylinders according to firmware.
        255             # Heads according to firmware.
        63              # Sectors according to firmware.

I/O command overhead:
        time to read 10MB block       0.279395 sec  =  0.014 msec/sector
        time to read 20480 sectors   11.864934 sec  =  0.579 msec/sector
        calculated command overhead                 =  0.566 msec/sector

Seek times:
        Full stroke:      250 iter in  0.836808 sec =  3.347 msec
        Half stroke:      250 iter in  0.861196 sec =  3.445 msec
        Quarter stroke:   500 iter in  1.415700 sec =  2.831 msec
        Short forward:    400 iter in  0.586330 sec =  1.466 msec
        Short backward:   400 iter in  1.365257 sec =  3.413 msec
        Seq outer:       2048 iter in  1.184569 sec =  0.578 msec
        Seq inner:       2048 iter in  1.184158 sec =  0.578 msec

Transfer rates:
        outside:       102400 kbytes in  1.367903 sec =  74859 kbytes/sec
        middle:        102400 kbytes in  1.472451 sec =  69544 kbytes/sec
        inside:        102400 kbytes in  1.521503 sec =  67302 kbytes/sec

It (or any FC SAN, for that matter) isn't an especially cheap way to get
storage. You don't get much option if you have an HP blade enclosure,
though.

HP's support was poor. Their Indian call-centre seems not to know much
about them and spectacularly failed to tell us if and how we could connect
this (with the 2/3-port FC hub option) to two of our blade servers, one of
which was one of the 'half-height' ones which require an arbitrated loop.
We ended up buying a FC switch.
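Incidentally, the "calculated command overhead" figure in that diskinfo
output is just the difference between the per-sector time of many small
reads and the per-sector time of one large read. A quick sketch checking
the arithmetic from the raw timings above (assuming the 512-byte sector
size reported for /dev/da1):

```python
# Reproduce diskinfo's "I/O command overhead" numbers from the raw timings
# quoted above. 10MB at 512 bytes/sector is 20480 sectors, so both tests
# cover the same amount of data; the difference per sector is the cost of
# issuing each individual command.
SECTORS = 20480  # 10MB / 512-byte sectors

per_sector_bulk = 0.279395 / SECTORS * 1000    # one big 10MB read, msec/sector
per_sector_small = 11.864934 / SECTORS * 1000  # 20480 separate reads, msec/sector
overhead = per_sector_small - per_sector_bulk

print(f"{per_sector_bulk:.3f} msec/sector")      # 0.014
print(f"{per_sector_small:.3f} msec/sector")     # 0.579
print(f"overhead = {overhead:.3f} msec/sector")  # 0.566
```

That ~0.57 msec per command is why the random-seek figure matters far more
than raw transfer rate for an OLTP-style workload.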