Hi *,

for our project we use big transactions for application coordination. Since we need to recover from failed statements at the statement level, we encapsulate each statement with SAVEPOINT/RELEASE SAVEPOINT. So far this seems to be the only way to recover from a failed statement while keeping the enclosing transaction intact.

Things we observed:

1) For transactions with many statements, memory consumption in the backend grows continuously.

2) Along with it, throughput decreases over the course of statement processing. Processing is CPU bound during that time, until the transaction finishes or rather aborts.

3) We seldom see the entire transaction commit, as it is usually ABORTED by an out-of-memory condition before the application can commit.

4) The backend server log shows MANY (in our case 15805) entries of the form:

   CurTransactionContext: 8192 total in 1 blocks; 8176 free (2 chunks); 16 used

   That is a utilization of only 0.2% per context. About 96% of the roughly 128 MB of data allocated by the backend process goes to these contexts, holding an effective data amount of just 252880 bytes.

5) Looking into the source READMEs (/backend/utils/mmgr/README, backend/access/transam/README), it appears to be intentional that the backend allocates and keeps memory for each started and committed subtransaction until the transaction is finally committed at top level.

Questions:

1) Is there another way to cope with failed statements without using savepoints and without aborting the entire transaction?

2) Would it be possible to let the parent subtransaction adopt the state currently kept in the memory allocated for a committed child subtransaction? If so, it would dramatically improve resource efficiency, and possibly performance as well, by reducing administrative overhead.

Regards and thanks in advance,
Frank Kardel & Ansgar Seiter
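
P.S. For readers who have not used this pattern: below is a minimal sketch of the per-statement SAVEPOINT/RELEASE recovery loop we describe above. It uses Python's built-in sqlite3 module purely for illustration (sqlite3 supports the same SAVEPOINT syntax); the table and values are made up, and against PostgreSQL the identical SQL pattern would be issued through a driver such as psycopg2.

```python
import sqlite3

# Autocommit mode: we issue BEGIN/COMMIT and savepoints ourselves.
conn = sqlite3.connect(":memory:", isolation_level=None)
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")

cur.execute("BEGIN")
for value in [1, 2, 2, 3]:        # the duplicate 2 will fail
    cur.execute("SAVEPOINT sp")   # start a subtransaction for this statement
    try:
        cur.execute("INSERT INTO t (id) VALUES (?)", (value,))
        cur.execute("RELEASE sp")         # statement succeeded: commit subtransaction
    except sqlite3.IntegrityError:
        cur.execute("ROLLBACK TO sp")     # undo only the failed statement
        cur.execute("RELEASE sp")         # discard the savepoint; transaction stays alive
cur.execute("COMMIT")                     # top-level transaction still commits

result = [row[0] for row in cur.execute("SELECT id FROM t ORDER BY id")]
print(result)  # the failed duplicate insert was rolled back: [1, 2, 3]
```

The point of the pattern is that ROLLBACK TO undoes only the failed statement while the enclosing transaction survives; the memory-growth problem we report comes from each such subtransaction leaving a CurTransactionContext behind until top-level commit.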