On Tue, Sep 9, 2008 at 7:59 AM, Andre Brandt <brandt@xxxxxxxxx> wrote:
> Hi out there,
>
> I've some little questions, perhaps you can help me...
>
> In order to get rid of wait I/O (as far as possible), we have to
> increase the I/O performance. Because there are a lot of storage systems
> out there, we need to know how many I/Os per second we actually need
> (to decide whether a given storage system can handle our load or a
> bigger system is required). Do you have some suggestions on how to
> measure that? Do you have experience with Postgres on something like an
> HP MSA2000 (10-20 disks) or RamSan systems?

Generally the best bang for the buck is direct attached storage with high quality RAID controllers, like the 3Ware, Areca, LSI, or HP 800 series. I've heard a few good reports on higher end Adaptecs, but most Adaptec RAID controllers are pretty poor db performers.

To get an idea of how much I/O you'll need, you need to see how much you use now. A good way to do that is to come up with a realistic benchmark and run it at a low level of concurrency on your current system, while running iostat and / or vmstat in the background. pidstat can be pretty useful too. Run a LONG benchmark so the numbers average out; you don't want to rely on a 5 minute run.

Once you have some base numbers, increase the scaling factor (i.e. the number of threads under test) and measure I/O, CPU, etc. for that test. Then figure out how high a load factor you'd need to run your full production load, multiply that by your 1x benchmark's I/O numbers, and add a fudge factor of 2 to 10 times for overhead.

The standard way to handle more I/O ops per second is to add spindles. It might take more than one RAID controller or external RAID enclosure to meet your needs.
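
FWIW, here's the kind of wrapper I mean for the baseline measurement. Just a rough sketch, assuming pgbench as the load generator and sysstat's iostat for sampling; the device name, database name, run length and client count are placeholders you'd swap for your own setup and a workload that actually looks like yours:

#!/usr/bin/env python
# Sketch: run a pgbench load while sampling iostat in the background,
# then report the average device IOPS over the run.
import subprocess

DEVICE = "sda"      # placeholder: device your data directory lives on
INTERVAL = 5        # seconds between iostat samples
DURATION = 3600     # run a LONG benchmark, not a 5 minute one
CLIENTS = 4         # low concurrency for the 1x baseline

# iostat in extended device mode, one sample every INTERVAL seconds
iostat = subprocess.Popen(["iostat", "-dxk", str(INTERVAL)],
                          stdout=subprocess.PIPE, text=True)

# run the benchmark in the foreground; replace with your own workload
subprocess.run(["pgbench", "-c", str(CLIENTS), "-T", str(DURATION), "testdb"])

iostat.terminate()
out, _ = iostat.communicate()

# Sum the r/s and w/s columns for our device across all samples.
# NOTE: column order differs between sysstat versions -- check the
# header of your own "iostat -dx" output and adjust the indexes.
# The first sample is the since-boot average; over an hour-long run
# it barely moves the result, but you can drop it if you like.
samples = []
for line in out.splitlines():
    cols = line.split()
    if cols and cols[0] == DEVICE:
        samples.append(float(cols[3]) + float(cols[4]))

if samples:
    print("average IOPS on %s: %.1f" % (DEVICE, sum(samples) / len(samples)))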
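
And the back of the envelope math at the end is just this (all numbers made up for illustration):

# Sizing from the 1x baseline; every figure here is a placeholder.
baseline_iops = 180.0   # average IOPS measured during the 1x benchmark
load_factor = 8         # how many times the 1x load you expect in production
fudge_factor = 3        # 2x to 10x headroom for overhead and peaks

required_iops = baseline_iops * load_factor * fudge_factor
print("storage should sustain roughly %d IOPS" % required_iops)

# Rough feel for spindle count: a 15k RPM drive does somewhere on the
# order of 150-200 random IOPS, so dividing gives a ballpark number of
# disks before RAID overhead is taken into account.
per_disk_iops = 175.0
print("ballpark spindles: %d" % round(required_iops / per_disk_iops))

If that spindle count comes out bigger than one controller or enclosure can hold, that's when you start looking at multiple controllers or external enclosures, as above.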