Hi,
you could set effective_cache_size to a high value: roughly the free memory
on your server that the OS uses for caching. It is only a planner estimate,
not an allocation, so PostgreSQL does not reserve that memory itself.
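As a rough sketch, with 64 GB of RAM, 8 GB of shared_buffers, and 8 GB reserved for the ZFS ARC, a value in this neighborhood would be plausible (the exact figure is an assumption based on the numbers in your mail, not a measured recommendation):

```ini
# postgresql.conf (sketch, not a tuned recommendation)
# effective_cache_size tells the planner how much OS/filesystem cache
# is likely available for PostgreSQL data; it allocates nothing.
# Rough arithmetic: 64 GB total - 8 GB shared_buffers - 8 GB ZFS ARC = ~48 GB
effective_cache_size = 48GB
```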
Christiaan Willemsen wrote:
Hi there,
We are running OpenSolaris on our new database machine. Specs:
2x Quad 2.6 Ghz Xeon
64 GB of memory
16x 15k5 SAS
The filesystem is configured using ZFS, and I think I have found a
configuration that performs fairly well.
I installed the standard PostgreSQL that came with the OpenSolaris
disk (8.3), and later added support for PostGIS. All fine.
I also tried to tune postgresql.conf to maximize performance and
memory usage.
Since PostgreSQL is the only thing running on this machine, we want it
to take full advantage of the hardware. For the ZFS cache, we have 8
GB reserved. The rest can be used by postgres.
The problem is getting it to use that much. At the moment it uses
only about 9 GB, which is far from enough, and I can't get it to use
more. I hope you can help me find a working config.
Here are the parameters I set in the config file:
shared_buffers = 8192MB
work_mem = 128MB
maintenance_work_mem = 2048MB
max_fsm_pages = 204800
max_fsm_relations = 2000
Database is about 250 GB in size, so we really need to have as much
data as possible in memory.
I hope you can help us tweak a few parameters to make sure all memory
will be used.
--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance