On 06/09/2012 01:52 AM, Konstantin Mikhailov wrote:
> Thanks a lot. I've tried to play with work_mem and after a few days
> of production testing pg behaves much better. I see no more files in
> the pgsql_tmp folder, pg processes consume reasonable memory, and
> there are no swap operations any more. I've studied the official pg
> docs about work_mem and still have no idea what the optimal value of
> work_mem should be. 1MB is obviously too small. I've increased it up
> to 32MB due to a lot of sorts and hash joins in the queries.
The trouble is that the optimal work_mem depends on your workload and
hardware. Or that's my understanding, anyway.
A workload with a few simple queries that sort lots of big data might
want work_mem to be really huge (but not so huge that it causes
thrashing or pushes indexes out of cache).
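A quick way to check whether a particular sort fits in work_mem is
EXPLAIN ANALYZE; the plan reports the sort method actually used
(table and column names here are made up, purely illustrative):

    SET work_mem = '32MB';
    EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_col;
    --   Sort Method: quicksort  Memory: ...kB       <- fit in work_mem
    --   Sort Method: external merge  Disk: ...kB    <- spilled to disk

If you see "external merge", that sort is writing the temp files you
were noticing in pgsql_tmp.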
A workload with lots of really complicated queries full of CTEs,
subqueries, etc. might use several times work_mem per connection,
since each sort or hash node in a plan gets its own work_mem
allowance. With lots of connections active at once, that can add up to
unexpectedly large amounts of RAM and cause thrashing or cache
competition even with quite a small work_mem.
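As a rough worst-case sketch (my numbers, purely illustrative): 100
connections, each running a plan with 4 concurrent sort/hash nodes at
work_mem = 32MB, can in theory grab

    100 connections * 4 nodes * 32MB = 12800MB (~12.5GB)

on top of shared_buffers and the OS cache, which is why even a
"small" work_mem can still hurt.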
Right now, Pg doesn't have the diagnostic tools or automatic tuning to
make it possible to determine an ideal value in any simple way, so it's
mostly a matter of examining query plans, tuning, and monitoring.
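Two monitoring hooks that help here (log_temp_files has been in Pg
since 8.3; the per-database temp counters are only in newer releases):

    # postgresql.conf: log each temp file bigger than 10MB (value in kB)
    log_temp_files = 10240

    -- 9.2+: cumulative temp file usage per database
    SELECT datname, temp_files, temp_bytes FROM pg_stat_database;

That tells you which databases and queries are spilling, and how
often, without guessing.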
Automatic tuning of work_mem would be great, but would also probably be
_really_ hard, and it still wouldn't solve the problem that n concurrent
sorts can consume n times work_mem: you can't give complicated_query a
strict enough work_mem limit without severely starving big_simple_query,
or having to run a session-local "SET work_mem" before it (see below).
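For that mixed-workload case, per-session or per-role overrides are the
usual workaround (the role name here is made up):

    -- just for one transaction; reverts on COMMIT/ROLLBACK
    BEGIN;
    SET LOCAL work_mem = '256MB';
    -- ... run big_simple_query here ...
    COMMIT;

    -- or persistently for everything a given role runs:
    ALTER ROLE reporting_user SET work_mem = '256MB';

It's manual, but it lets the big simple queries have a generous limit
without handing that same limit to every node of the complicated ones.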
A system for auto-tuning Pg at runtime would be amazing, but also very
_very_ hard, so tweaking params based on benchmarking and examination of
runtime performance is your only real option for now.
--
Craig Ringer