On Wed, Jul 15, 2009 at 12:05 PM, John R Pierce <pierce@xxxxxxxxxxxx> wrote:
> we have an app that's doing massive amounts of inserts, batched in
> transactions, multiple concurrent connections (tuned for optimal
> throughput, usually around 1 thread per cpu core plus a couple more).
> occasionally a transaction gets duplicated, and that causes a constraint
> violation which causes the whole transaction to abort unless we wrap each
> insert in a savepoint.
>
> my developers are asking me if there are limits as to how many savepoints
> can be active, etc. they have run into various such limits in oracle.

To add to what Alvaro said, savepoints aren't free. If you savepoint every
single insert, you'll wind up being much, much slower than if you can insert
a few dozen to a few hundred rows at a time. So you might be better off
bunching up a few hundred inserts per transaction; when one of those batches
fails, fall back to breaking it into smaller pieces and pushing those in
until you find the row that doesn't work, and spit that one out into a log.
It's more work on the app end, but by putting a hundred or so inserts
together per transaction you get much better performance than savepointing
each one.

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
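
[For the archives] A minimal sketch of the batch-then-bisect fallback
described above, as pure Python. Everything here is hypothetical
illustration: `insert_batch` is a stand-in for one all-or-nothing
transaction (a real version would run the INSERTs over a database
connection and catch the unique-violation error from the driver), and
`load` / `rejects` are made-up names for the app-side retry logic.

```python
def insert_batch(rows, committed):
    """Stand-in for one transaction: insert all rows or none.

    Raises ValueError on the first duplicate, committing nothing --
    mimicking a unique-constraint violation aborting the transaction.
    """
    staged = set()
    for r in rows:
        if r in committed or r in staged:
            raise ValueError("duplicate key: %r" % (r,))
        staged.add(r)
    committed.update(staged)  # "commit" the whole batch at once


def load(rows, committed, rejects):
    """Try the whole batch; on failure, bisect down to the bad row(s).

    Good rows still get inserted; each duplicate ends up in `rejects`
    (the "spit it out into a log" step from the post).
    """
    try:
        insert_batch(rows, committed)
    except ValueError:
        if len(rows) == 1:
            rejects.append(rows[0])  # found the offending row; log it
        else:
            mid = len(rows) // 2
            load(rows[:mid], committed, rejects)
            load(rows[mid:], committed, rejects)


# Usage: feed batches of a few hundred rows at a time into load().
committed, rejects = set(), []
load([1, 2, 3, 2, 4], committed, rejects)
# all unique rows inserted; the duplicate "2" lands in rejects
```

In the common case (no duplicates) each batch costs one transaction and
zero savepoints; only a failing batch pays the extra round trips, which
is the trade-off the post is recommending.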