On Sat, 2010-03-13 at 20:10 +0800, Craig Ringer wrote:
> On 13/03/2010 5:54 AM, Jeff Davis wrote:
> > On Fri, 2010-03-12 at 12:07 -0500, Merlin Moncure wrote:
> >> of course. You can always explicitly open a transaction on the remote
> >> side over dblink, do work, and commit it at the last possible moment.
> >> Your transactions aren't perfectly synchronized... if you crash in the
> >> precise moment between committing the remote and the local you can get
> >> in trouble. The chances of this are extremely remote though.
> >
> > If you want a better guarantee than that, consider using 2PC.
>
> Translation in case you don't know: 2PC = two phase commit.
>
> Note that you have to monitor "lost" transactions that were prepared for
> commit then abandoned by the controlling app, and periodically get rid of
> them, or you'll start having issues.

And you still have the problem of committing one 2PC transaction, crashing
before committing the other, and then crashing the transaction monitor
before it can record what happened :P, though this possibility is even
more remote than simply crashing between the two original commits (dblink
and local).

To get around this fundamental problem, you can use async queues and
record what has already been replayed on the remote side. Then, if there
is a crash on either side, you can simply replay the queue again: entries
that were already applied are skipped, so replay is safe to repeat.

> > The problem with things that are "extremely remote" possibilities is
> > that they tend to be less remote than we expect ;)
>
> ... and they know just when they can happen, despite all the odds, to
> maximise the pain and chaos caused.
>
> --
> Craig Ringer

--
Hannu Krosing   http://www.2ndQuadrant.com
PostgreSQL Scalability and Availability
   Services, Consulting and Training

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
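
[Editor's note] The replay-queue idea described above can be illustrated with a minimal, self-contained Python sketch. All names here are hypothetical (the thread does not prescribe an implementation); in a real system the queue and the applied-ID watermark would live in the databases themselves, and the data change plus the watermark update would commit in a single remote transaction:

```python
# Sketch of an idempotent async replay queue (hypothetical design).
# The local side keeps an ordered queue of changes; the remote side durably
# records the ID of the last change it applied. After a crash on either
# side, the whole queue can simply be replayed: already-applied entries
# are detected via the watermark and skipped.

class Remote:
    def __init__(self):
        self.applied_up_to = 0   # watermark: last change ID applied
        self.rows = {}           # the replicated data

    def apply(self, change_id, key, value):
        if change_id <= self.applied_up_to:
            return False         # already applied; replay is a no-op
        # In a real system, the data change and the watermark update
        # below would commit in ONE remote transaction, making them atomic.
        self.rows[key] = value
        self.applied_up_to = change_id
        return True

def replay(queue, remote):
    """Push every queued change to the remote; safe to call repeatedly."""
    applied = 0
    for change_id, key, value in queue:
        if remote.apply(change_id, key, value):
            applied += 1
    return applied

queue = [(1, "a", 10), (2, "b", 20), (3, "a", 30)]
remote = Remote()
assert replay(queue, remote) == 3   # first pass applies everything
assert replay(queue, remote) == 0   # crash-recovery replay: no duplicates
assert remote.rows == {"a": 30, "b": 20}
```

Because the remote commit covers both the change and the watermark in one transaction, a crash between "remote committed" and "local recorded it" simply means the next replay finds the watermark already advanced and does nothing, which is exactly the property the 2PC-between-two-commits window lacks.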