On Apr 6, 2010, at 9:49 AM, Ireneusz Pluta wrote:

> Greg Smith writes:
>>
>> The MegaRAID SAS 84* cards have worked extremely well for me in terms
>> of performance and features for all the systems I've seen them
>> installed in. I'd consider it a modest upgrade from that 3ware card,
>> speed wise.
>
> OK, sounds promising.
>
>> The main issue with the MegaRAID cards is that you will have to write
>> a lot of your own custom scripts to monitor for failures using their
>> painful MegaCLI utility, and under FreeBSD that also requires using
>> their Linux utility via emulation:
>> http://www.freebsdsoftware.org/sysutils/linux-megacli.html
>>
> And this is what worries me, as I prefer not to play with utilities too
> much, but to put the hardware into production instead. So I'd like to
> find out more precisely whether the expected speed boost would pay off
> enough for that pain. Let me ask the following way then, if such a
> question makes sense with the data I provide. I already have another box
> with a 3ware 9650SE-16ML, with the array configured as follows:
> RAID-10, 14 x 500GB Seagate ST3500320NS, stripe size 256K, 16GB RAM,
> Xeon X5355, write caching enabled, BBU, FreeBSD 7.2, UFS.
> When testing with bonnie++ on an idle machine, I got sequential block
> read/write of around 320MB/s / 290MB/s and random seeks around 660.
>
> Would that result be substantially better with LSI MegaRAID?

My experience with the 3ware 9650 on Linux was similar -- horribly slow for
some reason with RAID 10 on larger arrays. Others have claimed this card
performs well on FreeBSD, but the numbers above look just as bad as on Linux.
660 iops is slow for 14 spindles of any type, although the RAID 10 might
limit it to an effective 7 spindles on reads, in which case it's OK -- but it
should still top 100 iops per effective disk on 7200rpm drives unless the
effective concurrency of the benchmark is low.

My experience with the 9650 was that iops were OK, but sequential performance
for RAID 10 was very poor. On Linux, I was able to get better sequential read
performance like this (a rough command sketch is at the end of this mail):

* Set it up as 3 RAID 10 blocks of 4 drives each (the other 2 as spares, or
  for xlog or something), then software-RAID-0 those RAID 10 chunks together
  in the OS.
* Change the Linux 'readahead' block device parameter to at least 4MB (8192
  sectors, see blockdev --setra) -- I don't know if there is a FreeBSD
  equivalent.

With a better RAID card you should hit at minimum 800, if not 1000+, MB/sec,
depending on whether or not you bottleneck on your PCIe or SATA ports. I
switched to two Adaptec 5xx5 series cards (each with half the disks, software
RAID-0 between them) and got about 1200MB/sec max throughput and 2000 iops
from two sets of 10 Seagate STxxxxxxxNS 1TB drives. That is still not as good
as it should be, but much better. FWIW, one set of 8 drives in RAID 10 on the
Adaptec did about 750MB/sec sequential and ~950 iops read. It required XFS to
do this; ext3 was 20% slower in throughput. A PERC 6 card (an LSI MegaRAID
clone) performed somewhere between the two.

I don't like bonnie++ much; it's OK at single-drive tests but not as good at
larger arrays. If you have time, try fio and create some custom profiles
(example invocations below).

Lastly, for these sorts of tests, partition your array into smaller chunks so
that you can reliably test the front or the back of the drives (see the last
example below). Sequential speed at the front of a typical 3.5" drive is
about 2x as fast as at the end of the drive.
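For the Linux setup described above, roughly what I mean is something like
the following -- the device names are made up, so adjust them to whatever
your controller actually exposes, and treat this as a sketch rather than a
recipe:

    # stripe the three 4-drive hardware RAID 10 volumes together with md
    mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
    # raise readahead to 4MB (8192 x 512-byte sectors) on the striped device
    blockdev --setra 8192 /dev/md0
    blockdev --getra /dev/md0    # check that it took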
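As a starting point for fio, something along these lines is reasonable --
the filename is a placeholder for your array device, and on FreeBSD you
would want --ioengine=posixaio instead of libaio:

    # sequential read throughput
    fio --name=seqread --filename=/dev/md0 --rw=read --bs=1M \
        --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based
    # random read iops, with a postgres-like 8k block size
    fio --name=randread --filename=/dev/md0 --rw=randread --bs=8k \
        --direct=1 --ioengine=libaio --iodepth=64 --runtime=60 --time_based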
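And for the front-versus-back comparison, carve a small partition off the
start of the array and another off the end, then run the same sequential job
against each; the partition names here are only examples:

    fio --name=front --filename=/dev/sda1 --rw=read --bs=1M --direct=1 \
        --ioengine=libaio --iodepth=32 --runtime=30 --time_based
    fio --name=back --filename=/dev/sda9 --rw=read --bs=1M --direct=1 \
        --ioengine=libaio --iodepth=32 --runtime=30 --time_based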