On 30/05/11 20:41, Cédric Villemain wrote:
2011/5/30 Toby Corkindale<toby.corkindale@xxxxxxxxxxxxxxxxxxxx>:
On 28/05/11 18:42, Carl von Clausewitz wrote:
A few months ago, when I installed my first PostgreSQL, I had the same
problem. I tried to find information about an optimal memory config, but
there wasn't any "optimal memory setting calculator" on the internet, just
some guidance in the Postgres documentation
(http://www.postgresql.org/docs/9.0/interactive/kernel-resources.html#SYSVIPC).
I'm running FreeBSD 8.2 AMD64 with 8 GB of memory (this server is just for
PostgreSQL and a little PHP app with 2 users), and I have these settings
in postgresql.conf (which are not the defaults):
[snip]
work_mem = 64MB                         # min 64kB
maintenance_work_mem = 1024MB           # min 1MB
max_stack_depth = 64MB                  # min 100kB
Just a warning: be careful about setting work_mem to high values.
The actual memory used by a query can be many times that value, depending on
the complexity of your query.
In a particular query I saw last week, we were regularly exceeding the
available memory on a server, because the query was requiring 80 times the
value of work_mem, and work_mem had been set to a high value.
Reducing work_mem back to just 4MB reduced memory usage by a couple of
gigabytes, and had almost no effect on the execution time. (Actually, it was
marginally faster - probably because more memory was left for the operating
system's cache.)
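A quick way to experiment, by the way, is to lower work_mem for just your
session and watch what EXPLAIN ANALYZE says about the sort. A rough sketch,
with a made-up table name ("orders") purely for illustration:

  SET work_mem = '4MB';                 -- affects only this session
  EXPLAIN ANALYZE
  SELECT customer_id, sum(total)
  FROM orders                           -- hypothetical table
  GROUP BY customer_id
  ORDER BY sum(total) DESC;
  -- In the output, "Sort Method: quicksort  Memory: ..." means the sort fit
  -- within work_mem; "Sort Method: external merge  Disk: ..." means it
  -- spilled to temporary files.
  RESET work_mem;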
Maybe you're also aware that Linux may decide to swap in order to protect its
buffer cache (depending on the strategy set in its configuration), and
also that you may be limited by committable memory. On a default
install where swap is NOT at least twice the RAM size, you're not
able to commit all the RAM you have; but that does protect the buffer
cache, since the memory you can't allocate stays available to it.
So maybe you hit a point where your work_mem got swapped out...
Anyway, it's interesting to have a query where a large work_mem is not
better. Would it be hard to isolate the case and make it public? In
the long term it might make a good test to add to a performance farm,
as long as it isn't based on a non-optimal Linux configuration (I mean, if the
issue really *needs* work_mem to be reduced in order to be fixed).
In this case, it was not just slowing down due to the amount of work_mem
allocated -- it was exceeding several gigabytes of memory usage and
crashing out. Lower values of work_mem allowed the query to succeed, but
it still used almost 3 GB. Even lower values of work_mem got the query done in
only a few hundred MB -- and it was faster.
I note that if you exceed work_mem in a query, then I guess the temp
files created are cached by the VM cache, so the performance hit
shouldn't be *too* bad?
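(If you want to see exactly how much is being written out, log_temp_files
will record each temp file and its size as it's released. A sketch of the
setting -- the value is a size threshold in kB, 0 logs every temp file, -1
disables it:

  log_temp_files = 0            # in postgresql.conf; log all temporary files and their sizes
)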
I agree that the slowness of the 3 GB version could be due to swapping or
something like that... or just due to the VM cache being eliminated, as I
suggested.
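For reference, these are the knobs I'd look at on a Linux box to tell whether
overcommit or swap was actually involved (a rough sketch; these are the
standard kernel settings, but defaults vary by distro):

  grep Commit /proc/meminfo        # CommitLimit vs Committed_AS
  sysctl vm.overcommit_memory      # 0 = heuristic, 1 = always, 2 = strict accounting
  sysctl vm.overcommit_ratio       # with mode 2: CommitLimit = swap + ratio% of RAM
  sysctl vm.swappiness             # how readily the kernel swaps to keep its page cache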
Either way - the problem was that this (machine-generated) query was
pivoting and joining many views-of-views. It's a pretty nasty query.
The key fact is that Postgres (8.3) seems to allocate the full work_mem
amount every time it needs *some* work_mem - even if it could have
happily got by on just a few MB. So if your query allocates work_mem a
hundred times, it'll consume $work_mem * 100 -- or die trying.
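To make the multiplication concrete, here's a rough illustration (tables and
columns invented for the example) of how one statement can claim several
work_mem allowances at once -- each sort, hash join and hash aggregate in the
plan gets its own:

  CREATE TEMP TABLE events   (user_id int, kind text);
  CREATE TEMP TABLE payments (user_id int, amount numeric);

  SET work_mem = '64MB';
  EXPLAIN ANALYZE
  SELECT e.user_id, e.kind, sum(p.amount)
  FROM events e
  JOIN payments p ON p.user_id = e.user_id   -- typically a hash join: one work_mem allowance
  GROUP BY e.user_id, e.kind                 -- typically a hash aggregate: another allowance
  ORDER BY sum(p.amount) DESC;               -- a sort: another allowance
  -- a machine-generated plan with dozens of such nodes can approach
  -- dozens * work_mem at its peak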
I'm curious to know if Postgres 9.0 has improved this -- I'm going to
try re-running this query on it once I get a chance, but due to
contractual agreements this isn't quite as simple to test as you might
think.
(And running the test over a much smaller example data set might not
trigger the same query plan)
I'll get there eventually though :)
--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general