Thanks for including your configuration
and version; it makes things much easier.
Reply follows inline.

On 11/06/2012 09:04 PM, Bryan Montgomery wrote:
See the auto_explain contrib module. It can explain statements run within functions, as well as the function calls themselves:

http://www.postgresql.org/docs/current/static/auto-explain.html

Get a connection pooler. Urgently. See:

http://wiki.postgresql.org/wiki/PgBouncer

It is extremely unlikely that your server is running efficiently with that many concurrent connections actively working. Reducing it to (say) 100 and using transaction-level connection pooling may boost performance significantly.

That work_mem setting is really dangerous with your connection count. If many connections actually use that much, you'll run out of RAM in a hurry and enter a nasty paging storm. If possible, reduce it, then raise it selectively in the transactions where you know a high work_mem is needed.
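Concretely, the above might look like this in postgresql.conf. The values are illustrative assumptions only, not recommendations tuned for your workload:

```ini
# postgresql.conf -- illustrative values, adjust for your hardware
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '250ms'   # log plans for statements slower than this
auto_explain.log_nested_statements = on   # include statements executed inside functions
work_mem = '4MB'                          # keep the global value modest
```

Then raise work_mem only where it is actually needed, scoped to a single transaction:

```sql
BEGIN;
SET LOCAL work_mem = '256MB';  -- applies to this transaction only
-- ... run the big sort/hash query here ...
COMMIT;
```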
So you don't value your data and don't mind if you lose all of it, permanently and unrecoverably, if your server loses power or the host OS hard crashes?

It's much safer to use `synchronous_commit = off` and a commit_delay than to turn fsync off. If that isn't enough, get fast-flushing storage: a good RAID controller with a battery-backed cache you can put in write-back mode, or some high-quality SSDs with power-protected write caches.

As above: I hope your data isn't important to you.

-- 
Craig Ringer |
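P.S. For reference, the durability trade-off above in postgresql.conf terms (values are illustrative, not tuned for any particular workload):

```ini
# postgresql.conf -- safer alternative to fsync = off (illustrative values)
fsync = on                  # never disable on data you care about
synchronous_commit = off    # a crash may lose the last few commits,
                            # but cannot corrupt the database
commit_delay = 10000        # microseconds; helps group commits together
```

The key difference: `fsync = off` risks unrecoverable corruption of the whole cluster on a crash, while `synchronous_commit = off` only risks losing the most recently reported-committed transactions.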