On Tue, Nov 6, 2012 at 8:54 PM, Craig Ringer <ringerc@xxxxxxxxxxxxx> wrote:
Thanks for including your configuration and version; it makes things much easier.
Reply follows inline.
On 11/06/2012 09:04 PM, Bryan Montgomery wrote:
> I'm wondering what general ways there are to monitor and improve performance? We look at pgadmin's server status but that only sees the function being run. Additionally, is there a good way to 'explain' a function? Or do you have to execute the function steps individually and explain those?

See the auto_explain contrib module. It can explain statements within functions, as well as the functions themselves.
http://www.postgresql.org/docs/current/static/auto-explain.html
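For example, something along these lines should log the plans of the statements a function runs (just a sketch, from a superuser session; auto_explain can also be preloaded via shared_preload_libraries in postgresql.conf):

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 0;        -- log a plan for every statement
    SET auto_explain.log_nested_statements = on;  -- include statements run inside functions
    -- now call the function under test; the plans appear in the server log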
> The server typically has up to 500 connections with a max of 750 connections.

Get a connection pooler. Urgently. See http://wiki.postgresql.org/wiki/PgBouncer . It is extremely unlikely that your server is running efficiently with that many concurrent connections actively working. Reducing it to (say) 100 and using transaction-level connection pooling may boost performance significantly.
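A rough pgbouncer.ini sketch of that idea (the database name, paths and pool sizes below are placeholders, not taken from your setup):

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction    ; transaction-level pooling
    max_client_conn = 750      ; clients keep connecting as they do today
    default_pool_size = 100    ; but only ~100 server connections do the actual work

Clients then connect to port 6432 instead of connecting to PostgreSQL directly.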
> work_mem = 512MB                        # min 64kB

That's really dangerous with your connection count. If many connections actually use that much at once, you'll run out of RAM in a hurry and hit a nasty paging storm. If possible, reduce it, then raise it selectively in the transactions where you know a high work_mem is needed.
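For example, something like this keeps the global setting modest while giving one big query the memory it needs (a sketch only; the query itself is whatever you need to run):

    BEGIN;
    SET LOCAL work_mem = '512MB';   -- applies to this transaction only
    -- run the sort- or hash-heavy query here
    COMMIT;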
> fsync = off                             # turns forced synchronization on or off

So you don't value your data and don't mind if you lose all of it, permanently and unrecoverably, if your server loses power or the host OS hard crashes?
It's much safer to use `synchronous_commit = off` and a commit_delay. If that isn't enough, get fast-flushing storage like a good RAID controller with a battery-backed cache you can put in write-back mode, or some high-quality SSDs with power-protected write caches.
> full_page_writes = off                  # recover from partial page writes

As above: I hope your data isn't important to you.
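A sketch of the safer combination suggested above, for postgresql.conf (values are examples, not tuned for your workload):

    fsync = on                  # keep crash safety
    full_page_writes = on       # keep torn-page protection
    synchronous_commit = off    # a crash may lose the last few commits, but won't corrupt the database
    commit_delay = 100          # microseconds; lets nearby commits share a single WAL flush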
--
Craig Ringer