Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
> Kevin Grittner <kgrittn@xxxxxxxxx> writes:
>> Dave Owens <dave@xxxxxxxxxxxxx> wrote:
>>> max_connections = 450 ...we have found that we run out of shared
>>> memory when max_pred_locks_per_transaction is less than 30k.
>
>> It gathers the information in memory to return for all those locks
>> (I think both the normal heavyweight locks and the predicate locks
>> do that).  450 * 30000 is 13.5 million predicate locks you could
>> have, so they don't need a very big structure per lock to start
>> adding up.  I guess we should refactor that to use a tuplestore, so
>> it can spill to disk when it gets to be more than work_mem.
>
> Seems to me the bigger issue is why does he need such a huge
> max_pred_locks_per_transaction setting?  It's hard to believe that
> performance wouldn't tank with 10 million predicate locks active.
> Whether you can do "select * from pg_locks" seems pretty far down
> the list of concerns about this setting.

It would be interesting to know more about the workload which is
capable of that, but it would be a lot easier to analyze what's going
on if we could look at where those locks are being used (in summary,
of course -- nobody can make sense of 10 million detail lines).  A
sketch of one such summary query is below.

About all I can think to ask at this point is: how many total tables
and indexes are there in all databases in this cluster (counting each
partition of a partitioned table as a separate table)?  A query for
getting those counts is also sketched below.  With the promotion of
finer-grained locks to coarser ones, this should be pretty hard to
hit without a very large number of tables.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
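
A minimal sketch of the kind of summary meant above, assuming only
the standard pg_locks view (SSI predicate locks show up there with
mode SIReadLock); note that at 10+ million locks this query may run
into the same memory problem described in the quoted message:

    -- Summarize predicate locks by granularity and relation.
    -- Run in the database where the locks are held; relation only
    -- resolves to a name for the current database.
    SELECT locktype,
           relation::regclass AS rel,
           count(*) AS lock_count
    FROM pg_locks
    WHERE mode = 'SIReadLock'
    GROUP BY locktype, relation
    ORDER BY lock_count DESC
    LIMIT 50;

The locktype column ('relation', 'page', or 'tuple') shows how far
the promotion to coarser granularities has gone for each relation.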
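
And a sketch for the table/index counts, assuming a plain pg_class
query; pg_class is per-database, so it has to be run in each database
in the cluster and the results summed (relkind 'r' counts ordinary
tables, including partitions, and 'i' counts indexes):

    -- Count tables and indexes in the current database; repeat in
    -- every database in the cluster and add up the totals.
    SELECT relkind, count(*) AS total
    FROM pg_class
    WHERE relkind IN ('r', 'i')
    GROUP BY relkind;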