Joshua D. Drake wrote:
> PostgreSQL is only going to use what it needs. It relies on the OS for
> much of the caching, etc.
So that would actually mean that I could raise the ARC cache setting to
far more than 8 GB? As I said, our database is 250 GB, so I would expect
that Postgres needs more than it is using right now.
Several tables have over 500 million records (obviously partitioned).
At the moment we are running queries over large datasets, so I would
assume that Postgres would need a bit more memory than this.
> You are missing effective_cache_size. Try setting that to 32G.
That one was set to 24 GB. But this setting only tells Postgres how much
caching it can expect from the OS, right? It is not memory that Postgres
actually allocates, is it?
> You also didn't mention checkpoint_segments (which isn't memory but
> still important) and default_statistics_target (which isn't memory but
> still important).
checkpoint_segments is at the moment set to:
checkpoint_segments = 40
default_statistics_target is set to the default (I think that is 10).
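In case it helps, a sketch of how the statistics target can be checked and raised for a test, without touching postgresql.conf (the value 100 is illustrative; a higher target gives the planner better row estimates on large tables, at the cost of longer ANALYZE runs):

```sql
-- Show the current value (the default was 10 in older releases).
SHOW default_statistics_target;

-- Raise it for this session only, then rebuild statistics.
SET default_statistics_target = 100;
ANALYZE;
```

If the new plans look better, the setting can then be made permanent in postgresql.conf.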
Thanks already,
Christiaan
--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance