Tom --
Thanks for the pointers and advice. We've started by doubling
max_locks and halving shared_buffers; we'll see how it goes.
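For reference, a sketch of what that change amounts to (the values below
are illustrative, not necessarily the ones we ended up with):

    -- sketch: in postgresql.conf, something along the lines of
    --   max_locks_per_transaction = 128   (doubled from the 64 default)
    --   shared_buffers = 1GB              (halved from ~2GB)
    -- then restart and confirm with:
    SHOW max_locks_per_transaction;
    SHOW shared_buffers;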
Brian
On Oct 10, 2009, at 7:56 PM, Tom Lane wrote:
Brian Karlak <zenkat@xxxxxxxxxxx> writes:
"out of shared memory HINT: You might need to increase
max_locks_per_transaction"
You want to do what it says ...
1) We've already tuned postgres to use ~2GB of shared memory -- which
is SHMMAX for our kernel. If I try to increase
max_locks_per_transaction, postgres will not start because our shared
memory requirements exceed SHMMAX. How can I increase
max_locks_per_transaction without having my shared memory requirements
increase?
Back off shared_buffers a bit? 2GB is certainly more than enough
to run Postgres in.
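The lock table is sized at roughly max_locks_per_transaction *
(max_connections + max_prepared_transactions) entries, which is why raising
max_locks_per_transaction grows the shared memory request; trimming
shared_buffers buys that room back under SHMMAX. A quick sketch for
checking the settings involved:

    SELECT name, setting, unit
      FROM pg_settings
     WHERE name IN ('shared_buffers',
                    'max_locks_per_transaction',
                    'max_connections',
                    'max_prepared_transactions');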
2) Why do I need locks for all of my subtables, anyways? I have
constraint_exclusion on. The query planner tells me that I am only
using three tables for the queries that are failing. Why are all of
the locks getting allocated?
Because the planner has to look at all the subtables and make sure
that they in fact don't match the query. So it takes AccessShareLock
on each one, which is the minimum strength lock needed to be sure that
the table definition isn't changing underneath you. Without *some* lock
it's not really safe to examine the table at all.
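One way to see this is to run one of the failing queries (or just EXPLAIN
it) inside a transaction and then look at pg_locks; the following is only a
sketch, with parent_table and part_key as placeholder names:

    BEGIN;
    EXPLAIN SELECT * FROM parent_table WHERE part_key = 42;
    -- expect one AccessShareLock per child table (and its indexes)
    -- that the planner had to open
    SELECT locktype, mode, count(*)
      FROM pg_locks
     WHERE pid = pg_backend_pid()
     GROUP BY locktype, mode;
    ROLLBACK;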
regards, tom lane