Some more interesting information. The insert statement is issued via a JDBC callback to the Postgres database (because the application requires partial commits...the equivalent of autonomous transactions). What I noticed was that, when using the JDBC insert, the writer process was very active and consumed a lot of memory. When I attempted the same insert manually in pgAdmin, the writer process did not appear in top's list of processes. I wonder if the JDBC callback causes Postgres to allocate memory differently.

-----Original Message-----
From: Tom Lane [mailto:tgl@xxxxxxxxxxxxx]
Sent: Tuesday, March 21, 2006 2:38 PM
To: Sriram Dandapani
Cc: pgsql-admin@xxxxxxxxxxxxxx
Subject: Re: [ADMIN] out of memory error with large insert

"Sriram Dandapani" <sdandapani@xxxxxxxxxxxxxxx> writes:
> On a large transaction involving an insert of 8 million rows, after a
> while Postgres complains of an out of memory error.

If there are foreign-key checks involved, try dropping those constraints
and re-creating them afterwards. Probably faster than retail checks
anyway ...

			regards, tom lane
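For reference, the drop-and-recreate pattern Tom suggests might look like the following. This is only a sketch: the table name (`child_table`), constraint name (`child_parent_fk`), and columns are made-up placeholders, since the actual schema is not shown in the thread.

```sql
-- Drop the FK constraint so the bulk insert skips per-row ("retail") checks.
-- (child_table, child_parent_fk, parent_table, and the columns are
-- hypothetical names; substitute your own.)
ALTER TABLE child_table DROP CONSTRAINT child_parent_fk;

-- ... perform the 8-million-row bulk insert here ...

-- Re-create the constraint; Postgres validates all rows in one pass,
-- which is typically faster and uses less memory than millions of
-- deferred per-row trigger checks queued inside one transaction.
ALTER TABLE child_table
  ADD CONSTRAINT child_parent_fk
  FOREIGN KEY (parent_id) REFERENCES parent_table (id);
```

Note that re-adding the constraint will fail if the inserted data contains rows that violate it, so any bad rows surface at that point rather than during the insert.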