At 06:24 PM 1/28/2007, Josh Berkus wrote:
John,
> -work_mem
Depends on the number of concurrent queries you expect to run and what size
sorts you expect them to do.
EXPLAIN ANALYZE is your friend. It will tell you how much data each
query is manipulating and therefore how much memory each query will chew.
The next step is to figure out how many of each kind of query will be
running concurrently.
Multiplying the per-query memory by that concurrency and summing across
the query types tells you the total demand, and therefore how much
work_mem each kind of query can safely be given.
If you have a deep enough understanding of how your pg system is
working, then you can set work_mem on a per-query basis to get the
most efficient use of the RAM in your system.
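For example, a rough sketch of the per-query tuning I mean (the table and
query names are invented, and I'm assuming 8.2-style memory units; earlier
releases take the value in KB). If 20 such queries each need ~50MB for
their sorts, that's roughly 1GB of RAM for sorting alone:

   -- Time the query at the default work_mem, then again after raising it
   -- just for this session, and compare what EXPLAIN ANALYZE reports.
   EXPLAIN ANALYZE SELECT * FROM orders ORDER BY order_date;
   SET work_mem = '64MB';    -- affects only this session
   EXPLAIN ANALYZE SELECT * FROM orders ORDER BY order_date;
   RESET work_mem;           -- back to the server-wide default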
> -maintenance_work_mem - 50% of the largest table?
Actually, in current code I've found that anything over 256MB doesn't
get used.
Is this considered a bug? When will this limit go away? Does
work_mem have a similar limit?
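In the meantime, the way I'd use it is to raise it only for the session
doing the heavy maintenance, staying at the ~256MB point Josh mentions
(sketch only; the table and index names are invented, same 8.2 unit
syntax assumed):

   SET maintenance_work_mem = '256MB';   -- this session only
   VACUUM ANALYZE bigtable;
   CREATE INDEX bigtable_created_idx ON bigtable (created_at);
   RESET maintenance_work_mem;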
> -shared_buffers - max value 50000
Actually, I need to update that. On newer, faster multi-core machines
you may want to allocate up to 1GB of shared buffers.
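For reference, I read the old "50000" guidance as 50000 8KB buffers,
i.e. roughly 400MB, so in postgresql.conf the two recommendations look
something like this (sketch; changing shared_buffers needs a restart):

   # old advice: 50000 x 8KB buffers, roughly 400MB
   #shared_buffers = 50000
   # Josh's updated advice for newer multi-core boxes (8.2 unit syntax)
   shared_buffers = 1GB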
> -effective_cache_size - max 2/3 of available ram, ie 24GB on the
> hardware described above
Yes.
Why? "max of 2/3 of available RAM" sounds a bit
hand-wavy. Especially with 32gb, 64GB, and 128GB systems available.
Is there are hidden effective or hard limit here as well?
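For the record, my understanding is that effective_cache_size is only a
planner hint (nothing is actually allocated), so in postgresql.conf the
2/3 rule for the machine above would just be the following (8.2 unit
syntax; older releases want the value in 8KB pages):

   # planner's estimate of OS cache + shared_buffers available for caching
   effective_cache_size = 24GB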
For a dedicated pg machine, I'd assume one would want to be very
aggressive about configuring the kernel, minimizing superfluous
services, and configuring memory use so that as much RAM as possible
goes to pg, and in the most intelligent way for one's specific pg
usage scenario.
> -shmmax - how large dare I set this value on dedicated postgres servers?
Set it to 2GB and you'll be covered.
I thought that on 32-bit systems the 2GB shmmax limit had been raised
to 4GB, and that there is essentially no limit to shmmax on 64-bit
systems?
What are Oracle and EnterpriseDB recommending for shmmax these days?
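On Linux, the sysctl side of Josh's 2GB suggestion would look roughly
like this (platform-specific sketch; shmall is counted in pages, 4KB on
x86):

   # /etc/sysctl.conf  (apply with "sysctl -p")
   kernel.shmmax = 2147483648    # 2GB, max size of a single segment
   kernel.shmall = 524288        # total shm, in 4KB pages (= 2GB here)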
My random thoughts,
Ron Peacetree