On 2/3/2010 9:10 AM, Amitabh Kant wrote:
> Hello
>
> I have a server dedicated for Postgres with the following specs:
> RAM 16GB, 146GB SAS (15K) x 4 - RAID 10 with BBU, Dual Xeon E5345 @
> 2.33GHz
> OS: FreeBSD 8.0
> It runs multiple (approx 10) databases ranging from 500MB to over 24 GB
> in size. All of them are of the same structure, and almost all of them
> have very heavy reads and writes.
>
> With regards
> Amitabh Kant
What problems are you having? Is it slow? Is there something you are
trying to fix, or is this just the first tune up?
> memory allocations. The last time I tried, Postgres refused to start and
> I had to fall back to the default settings.
It's probably upset about the amount of shared memory. There is a way in
FreeBSD to set the maximum amount of shared memory available. A quick
Google search turned up:
kern.ipc.shmmax
Dunno if that's right. When you try to start PG, if it cannot allocate
enough shared memory it'll spit out an error message into its log saying
how much it tried to allocate.
Check:
http://archives.postgresql.org/pgsql-admin/2004-06/msg00155.php
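If that's the knob, the FreeBSD side is just a sysctl. As a rough sketch
(the values below are only illustrative, sized so a ~3840MB shared_buffers
plus overhead fits), /etc/sysctl.conf would get something like:

   kern.ipc.shmmax=4294967296    # largest single segment, in bytes (~4GB)
   kern.ipc.shmall=1048576       # total shared memory, in 4KB pages (~4GB)

You can see the current limit with "sysctl kern.ipc.shmmax" and set it on
the fly with "sysctl kern.ipc.shmmax=4294967296" before restarting PG.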
> maintenance_work_mem = 960MB # pg_generate_conf wizard 2010-02-03
> checkpoint_completion_target = 0.9 # pg_generate_conf wizard 2010-02-03
> effective_cache_size = 11GB # pg_generate_conf wizard 2010-02-03
> work_mem = 160MB # pg_generate_conf wizard 2010-02-03
> wal_buffers = 8MB # pg_generate_conf wizard 2010-02-03
> checkpoint_segments = 16 # pg_generate_conf wizard 2010-02-03
> shared_buffers = 3840MB # pg_generate_conf wizard 2010-02-03
> max_connections = 100 # pg_generate_conf wizard 2010-02-03
Some of these seem like too much. I'd recommend starting with one or
two and seeing how it runs, then increasing them if you're still slow.
Start with effective_cache_size, shared_buffers and checkpoint_segments.
Leave work_mem and maintenance_work_mem until last.
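Roughly, a first pass in postgresql.conf might look like this (the numbers
are only an illustration of "start smaller", not something tuned for your
workload):

   shared_buffers = 2GB            # smaller than the wizard's 3840MB; raise later if it helps
   effective_cache_size = 11GB     # just a planner hint, allocates nothing
   checkpoint_segments = 16
   # leave work_mem and maintenance_work_mem at the defaults for now

Then bump shared_buffers (and the kernel's shmmax along with it) once you
see how it behaves.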
-Andy