Re: Hardware/OS recommendations for large databases (

Alan,

On 11/18/05 5:41 AM, "Alan Stange" <stange@xxxxxxxxxx> wrote:

> 
> That's interesting, as I occasionally see more than 110MB/s of
> postgresql IO on our system.  I'm using a 32KB block size, which has
> been a huge win in performance for our usage patterns.   300GB database
> with a lot of turnover.  A vacuum analyze now takes about 3 hours, which
> is much shorter than before.  Postgresql 8.1, dual opteron, 8GB memory,
> Linux 2.6.11, FC drives.

300GB / 3 hours ≈ 28MB/s (300,000MB over 10,800 seconds).

If you are on a 2.6 Linux kernel, keep in mind that the I/O statistics changed in
tools like iostat and vmstat, so the burst rates they report can make you think
you are getting more net I/O than you actually are.

The only meaningful stat is (size of data) / (time to process data).  Do a
sequential scan of one of your large tables whose size you know, divide that
size by the run time, and report the result.
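
For example, one rough way to do this from psql (just a sketch; "bigtable" here
is a placeholder for one of your own tables, and pg_relation_size() is available
as of 8.1):

    -- turn on per-statement timing in psql
    \timing
    -- on-disk size of the table, in bytes
    SELECT pg_relation_size('bigtable');
    -- force a full sequential scan of the table
    SELECT count(*) FROM bigtable;

Divide the size reported by pg_relation_size() by the run time of the count(*)
to get the net sequential scan rate.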

I'm compiling some new test data to make my point now.

Regards,

- Luke


