"Sorin N. Ciolofan" <ciolofan@xxxxxxxxxxxx> writes:
> I will simplify things in order to describe when the error occurred:
> The input of the application is data read from files on disk, which is
> processed and then inserted into the database in one transaction. The
> total quantity of data is an integer number of data files, n*q, where q
> is a file that is always 60 kB and n is a positive integer.
> For n=23, shared_buffers=1000, and max_locks_per_transaction=64,
> Postgres throws the following exception:
> org.postgresql.util.PSQLException: ERROR: out of shared memory
> For n=23 I estimated that we create and manipulate about 8000 tables.

Okay, as far as I know the only user-causable way to get that message is
to run out of lock-table space, and a transaction does take a lock for
each table it touches, so I concur that raising max_locks_per_transaction
is an appropriate response.  If you didn't see any change in the maximum
N you could handle, then I wonder whether you actually did raise it ---
does "show max_locks_per_transaction" reflect the intended new value?

Another possibility is that there's something about your code that makes
the number of locks involved very nonlinear in N.  You could try checking
the number of rows in pg_locks immediately before commit at some smaller
values of N to confirm what the scaling is really like.

			regards, tom lane
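A minimal sketch of both checks suggested above, run from psql inside the loading transaction (the table creation is elided; the pg_backend_pid() filter is an addition here to restrict the count to the current session's locks):

```sql
-- Confirm the running server actually picked up the new setting;
-- max_locks_per_transaction only takes effect after a server restart.
SHOW max_locks_per_transaction;

BEGIN;
-- ... create and populate the tables here ...

-- Immediately before commit, count the locks held by this backend.
-- Repeat for several values of N and compare the counts to see
-- whether lock usage grows linearly or faster.
SELECT count(*) FROM pg_locks WHERE pid = pg_backend_pid();
COMMIT;
```

The total size of the shared lock table is roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) lock slots, so one transaction touching ~8000 tables can exceed it even though other sessions are idle; that is why raising the setting (and restarting) is the expected fix.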