Dear Mr. Tom Lane,

From what I've read in the postgresql.conf file, I understand that each unit increase of the "max_locks_per_transaction" parameter also increases the amount of shared memory used. But according to the error message, the shared memory already appears to be fully consumed. Or is the error message irrelevant and misleading in this situation?

With best regards,
Sorin

-----Original Message-----
From: Tom Lane [mailto:tgl@xxxxxxxxxxxxx]
Sent: Tuesday, March 27, 2007 4:59 PM
To: Sorin N. Ciolofan
Cc: pgsql-general@xxxxxxxxxxxxxx; pgsql-admin@xxxxxxxxxxxxxx; pgsql-performance@xxxxxxxxxxxxxx
Subject: Re: [GENERAL] ERROR: out of shared memory

"Sorin N. Ciolofan" <ciolofan@xxxxxxxxxxxx> writes:
> It seems that the legacy application creates tables dynamically, and the
> number of created tables depends on the size of the application's input.
> For the specific input that generated the error, I've estimated about
> 4000 created tables.
> Could this be the problem?

If you have transactions that touch many of them within one transaction,
then yup, you could be out of locktable space. Try increasing
max_locks_per_transaction.

			regards, tom lane
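[For readers following the thread: a minimal postgresql.conf sketch of the change Tom suggests. The sizing comment reflects the standard PostgreSQL documentation for this parameter; the specific values below are illustrative assumptions, not recommendations, and changing this setting requires a server restart.]

```
# postgresql.conf -- illustrative values only
max_connections = 100
max_locks_per_transaction = 256    # default is 64

# The shared lock table is sized for roughly
#   max_locks_per_transaction * (max_connections + max_prepared_transactions)
# object locks in total, pooled across all sessions. A single transaction
# may use more than max_locks_per_transaction slots as long as the pool
# is not exhausted, which is why one transaction touching ~4000 tables
# can trigger "out of shared memory" even when others run fine.
```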