On Fri, Nov 9, 2012 at 8:43 AM, Eliot Gable <egable+pgsql-general@xxxxxxxxx> wrote:
> I have a PGSQL 9.0.1 database which is on the back-end of an app I was
> stress testing last night. This morning, when I try to run psql, I get:
>
>     psql: FATAL:  out of shared memory
>     HINT:  You might need to increase max_locks_per_transaction.
>
> I believe something in the app is failing to release resources -- maybe a
> 3-way or 4-way deadlock between writes to tables inside the triggers in
> PGSQL, or a deadlock between multiple processes talking to the app and the
> database, or something leaking in the app itself which is causing locks to
> not be freed.

The most likely possibility is that you have a transaction being left open
and accumulating locks. Of course, you also have to rule out the possibility
that you simply need to increase max_locks_per_transaction: if you have a
lot of tables, it can be reasonable to have to raise it even on a stock
config.

One thing that can cause this, unfortunately, is advisory locks eating up
exactly the amount of shared memory you have. That's another thing to rule
out.

> How do I track down what is going on if I cannot even run psql to get into
> the DB to run troubleshooting queries?

It's a nasty problem. If shared memory is exhausted and stuck, I think the
only practical thing to do is restart the database or kill the attached
clients. Maybe try restarting the test, but keep an open session *with an
open transaction* that has previously queried both pg_locks and
pg_stat_activity. It's just a hunch, but that might allow you to query those
views and gather some details.

merlin
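
For the open-transaction theory, a quick check once you can get a session
again might look like this (a minimal sketch; procpid and current_query are
the 9.0-era column names, renamed to pid and query in 9.2):

    -- Sessions with a transaction open, oldest first. A connection that
    -- has sat "<IDLE> in transaction" for a long time is the usual
    -- suspect for lock accumulation.
    SELECT procpid, usename, xact_start, current_query
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
    ORDER BY xact_start;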
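
To rule out advisory locks, and to see what is actually filling the lock
table, something along these lines; the 128 below is only an illustrative
value (the default is 64, and changing it requires a restart):

    -- Break the shared lock table down by type; a large pile of
    -- 'advisory' rows means the app is taking advisory locks and not
    -- releasing them.
    SELECT locktype, mode, granted, count(*)
    FROM pg_locks
    GROUP BY locktype, mode, granted
    ORDER BY count(*) DESC;

    -- If the lock table is simply too small for the schema, raise it in
    -- postgresql.conf and restart:
    --   max_locks_per_transaction = 128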
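
As for the open-session trick: the idea behind the hunch is that a
transaction keeps the locks it has already taken, so a session that touched
pg_locks and pg_stat_activity before the lock table filled up should be able
to query them again without needing new lock entries. A sketch of what that
spare session might look like:

    -- Run in a spare psql session *before* re-running the stress test,
    -- and leave the transaction open.
    BEGIN;
    SELECT count(*) FROM pg_locks;
    SELECT count(*) FROM pg_stat_activity;

    -- ...reproduce the failure, then dig in from this same session:
    SELECT l.pid, l.locktype, l.mode, l.granted, a.current_query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.procpid = l.pid
    ORDER BY l.pid;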