Richard Neill wrote:
Likewise, is there any way to check whether, for example, postgres is
running out of work memory?
It doesn't work like that; work_mem is not an allocation. What happens is
that the optimizer estimates how much memory a sort is going to need, and
then uses work_mem to decide whether that is something it can do in RAM
or something that needs to be done via a more expensive disk-based
sorting method. You can tell whether it's set high enough by turning on
log_temp_files and watching when those files get created; they appear
whenever a sort bigger than work_mem needs to be done.
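As a minimal sketch of how to watch for that (the table and column names
here are hypothetical, and changing log_temp_files per-session requires
superuser privileges):

-- Log every temporary file created (the value is a size threshold in
-- kB; 0 logs all of them).
SET log_temp_files = 0;

-- Shrink work_mem so a sort is forced to spill to disk, then inspect
-- the plan ("big_table" and "some_col" are made-up names):
SET work_mem = '64kB';
EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_col;

A spilled sort shows up as "Sort Method: external merge  Disk: ...kB" in
the EXPLAIN ANALYZE output, and as a "temporary file" LOG line in the
server log.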
commit_delay = 50000 # range 0-100000, in microseconds
commit_siblings = 5 # range 1-1000
Random note: that is far too high a value for commit_delay. It's
unlikely to be helping you, and it might be hurting sometimes. The whole
commit_delay feature is quite difficult to tune correctly, and it's
really only useful when there's heavy write activity going on and you
want to carefully tweak the write chunking size. Even in that situation
the useful range for commit_delay is very small, and 50K is way beyond
it. I'd recommend changing this back to the default; unless you're at
the point where you're running your own benchmarks to prove the
parameter helps you, it's not something you should try to adjust.
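For reference, a sketch of what going back to stock looks like (these
match the shipped defaults):

commit_delay = 0    # microseconds; 0 disables the group-commit delay
commit_siblings = 5 # min concurrent open transactions before delaying

With commit_delay at 0, commit_siblings has no effect at all, so there's
nothing else to put back.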
--
Greg Smith 2ndQuadrant Baltimore, MD
PostgreSQL Training, Services and Support
greg@xxxxxxxxxxxxxxx www.2ndQuadrant.com