Vegard Bønes <vegard.bones@xxxxxx> wrote:

> I tried increasing max_pred_locks_per_transaction by a factor 20,
> and that seems to have helped.

Good to know. After I sent my response I was hoping you wouldn't take it to imply that increasing it by a factor of 10 was necessarily the useful maximum. It depends mostly on the maximum number of different pages within a single table you want to track at a finer-grained resolution than the relation level. If you have transactions which read a lot of pages from a single table, this might need to go pretty high.

> is there any reason, beside memory concerns, not to have a very
> high value for max_pred_locks_per_transaction?

No, it is strictly a question of how much memory will be allocated to the shared memory segment for the purpose of tracking these.

Well, I should say that we have some O(N^2) behavior in tracking read-write conflicts that should probably be improved -- so if a large number of predicate locks results in a large number of read-write conflicts, performance could suffer. Whether you run into that depends a lot on your workload. I have seen one report of such an effect in one academic paper which compared various techniques for implementing truly serializable transactions. The PostgreSQL implementation still scaled better than any of the benchmarked alternatives, but they did hit a point where a large percentage of the time was spent in that area.

Limiting the number of active connections with a transaction-based connection pooler (like pgbouncer configured in transaction mode) is currently your best defence against hitting the wall on this issue. (Illustrative settings for both the lock limit and a transaction-mode pooler are sketched below.)

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
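A rough sketch of the sizing discussed above, in postgresql.conf form; the value shown simply reflects the factor-of-20 increase over the default of 64 and is not a recommendation:

    # postgresql.conf
    # This setting determines how much shared memory is reserved for
    # predicate lock tracking, so changing it requires a server restart.
    max_pred_locks_per_transaction = 1280    # default is 64

The shared predicate lock table is sized at roughly max_pred_locks_per_transaction * (max_connections + max_prepared_transactions) entries, so a single transaction can use more than the per-transaction figure as long as the total fits. To get a feel for how many predicate locks are actually held, one way is to watch pg_locks, where SSI predicate locks show up with mode SIReadLock; for example:

    -- Count predicate locks currently held, grouped by relation.
    SELECT relation::regclass AS relation, count(*) AS siread_locks
      FROM pg_locks
     WHERE mode = 'SIReadLock'
     GROUP BY 1
     ORDER BY 2 DESC;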
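And a minimal sketch of pgbouncer in transaction mode -- the database name, addresses, auth settings, and pool size here are placeholders to adapt, not recommendations:

    ; pgbouncer.ini
    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; A server connection is returned to the pool at COMMIT/ROLLBACK.
    pool_mode = transaction
    ; Caps concurrent server connections (and so concurrent transactions)
    ; per database/user pair.
    default_pool_size = 20
    max_client_conn = 500

With pool_mode = transaction, the number of transactions actually running at any instant -- and therefore the number holding predicate locks and generating read-write conflicts -- is capped by the pool size, which is what keeps the O(N^2) conflict tracking in check.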