At 08:56 AM 3/2/2007, Carlos Moreno wrote:
>Florian Weimer wrote:
>>* Alex Deucher:
>>>I have noticed a strange performance regression and I'm at a loss as
>>>to what's happening. We have a fairly large database (~16 GB).
>>Sorry for asking, but is this a typo? Do you mean 16 *TB* instead of
>>16 *GB*?
>>If it's really 16 GB, you should check if it's cheaper to buy more RAM
>>than to fiddle with the existing infrastructure.
>This brings me to a related question:
>Do I need to specifically configure something to take advantage of
>such an increase in RAM?
>In particular, is the amount of RAM that postgres can make use of
>limited by shared_buffers or some other parameter?
>Should shared_buffers be a fixed fraction of the total amount of
>physical RAM, or should it be the total amount minus half a gigabyte
>or so?
>As an example, if one upgrades a host from 1GB to 4GB, what would be
>the right configuration changes, assuming 8.1 or 8.2? (Or at least,
>what would be the critical settings?)
>Thanks,
>Carlos
Unfortunately, pg does not (yet! ;-) ) treat all available RAM as a
common pool and dynamically allocate it intelligently to each of the
various memory data structures.
So if you increase your RAM, you will have to manually change the
entries in the pg config file to take advantage of it
(and restart pg afterwards for the new config values to take effect).
The pertinent values are all those listed under "Memory" in the
annotated pg conf file: shared_buffers, work_mem, maintenance_work_mem, etc.
http://www.powerpostgresql.com/Downloads/annotated_conf_80.html
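As a rough illustration only (these are commonly cited starting points,
not recommendations specific to this thread, and the right numbers
depend heavily on your workload), a dedicated 4GB host might end up
with something like this in postgresql.conf. Note that the MB/GB unit
suffixes are 8.2 syntax; 8.1 expects shared_buffers and
effective_cache_size as counts of 8kB pages:

    # Illustrative values for a dedicated 4GB host -- tune for your workload.
    shared_buffers = 1GB             # often ~25% of RAM (8.1 syntax: 131072, in 8kB pages)
    work_mem = 16MB                  # per sort/hash step, per backend, so keep it modest
    maintenance_work_mem = 256MB     # used by VACUUM, CREATE INDEX, etc.
    effective_cache_size = 3GB       # planner hint: RAM the OS has free for disk caching

After the restart you can confirm what the server is actually using
from psql with SHOW shared_buffers; SHOW work_mem; and so on. (work_mem
can also be overridden per session with SET, which is handy for the odd
big sort without raising the global value.)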
Cheers,
Ron Peacetree