One concern I have about these trigger-based replication systems is that they may push each DML statement to the slave separately, one at a time, as a transaction runs. My slave will literally be 1400 miles away, and all replication traffic will go over the net. If I have a transaction containing 1000 DML statements, is this thing going to update the slave 1000 separate times over the net? (I may not live long enough to see it finish.) Or will it be smart enough to wait until I commit and then send over a single bundle of 1000 DML? The time difference will be more than significant.
Thanks for all the great input on this!
-----Original Message-----
From: pgsql-general-owner@xxxxxxxxxxxxxx [mailto:pgsql-general-owner@xxxxxxxxxxxxxx] On Behalf Of Greg Sabino Mullane
Sent: Thursday, December 17, 2009 11:58 AM
To: pgsql-general@xxxxxxxxxxxxxx
Subject: Re: Justifying a PG over MySQL approach to a project
> How difficult is it to switch the master's hat from one DB instance
> to another? Let's say the master in a master-slave scenario goes
> down but the slave is fine. Can I designate the slave as being the
> new master, use it for read/write, and then just call the broken
> master the new slave once it comes back to life (something like that)?
Sure. Bucardo slaves are not changed at all, so they are already
read/write and don't need anything special done to "unslave" them.
One possible way to handle the scenario is:
Assuming three servers:
* A (master) sends changes to B, receives read/write queries
* B (slave) has transaction_read_only set to true, receives read queries
* C has the Bucardo database and daemon
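For what it's worth, B's read-only state can be set as a per-database default. A minimal sketch (the host and the 'appdb' database name are placeholders; the settable parameter is default_transaction_read_only, and you could instead set it per application role with ALTER ROLE so Bucardo's own user is unaffected):

  psql -h box-b.example.com -d appdb -c \
    "ALTER DATABASE appdb SET default_transaction_read_only = on"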
Box A goes down suddenly.
* Stop Bucardo on box C
* Flip the boxes around in the bucardo.db table
* Do a 'bucardo_ctl validate sync all'
(This will create the needed triggers on B)
* Set B's transaction_read_only to false
* Point your apps at B instead of A for read/write queries
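Spelled out, those failover steps might look roughly like this when run from box C. This is only a sketch: the host names, the 'appdb' database, the entry names in bucardo.db, and its column layout are assumptions, so check your own installation first.

  bucardo_ctl stop                  # stop the Bucardo daemon on C

  # swap the connection info so the old slave (B) becomes the source;
  # the dbhost column and the 'master_db'/'slave_db' names are assumed
  psql -d bucardo -c "UPDATE bucardo.db SET dbhost = 'box-b.example.com' WHERE name = 'master_db'"
  psql -d bucardo -c "UPDATE bucardo.db SET dbhost = 'box-a.example.com' WHERE name = 'slave_db'"

  bucardo_ctl validate sync all     # recreates the needed triggers on B

  # make B writable for the applications again
  psql -h box-b.example.com -d appdb -c "ALTER DATABASE appdb SET default_transaction_read_only = off"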
When A comes back up:
* DROP SCHEMA bucardo CASCADE; (drops all triggers)
* Set transaction_read_only to true
* Start Bucardo on C
* Once caught up, point read-only queries to A
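Again only as a sketch (same placeholder host and database names as above), re-slaving A could look like:

  # on the old master A: drop Bucardo's triggers and make it read-only
  psql -h box-a.example.com -d appdb -c "DROP SCHEMA bucardo CASCADE"
  psql -h box-a.example.com -d appdb -c "ALTER DATABASE appdb SET default_transaction_read_only = on"

  # back on C: restart the daemon so A starts catching up
  bucardo_ctl start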
If you are in a rush, you can point things at B immediately after A fails, but you'll then have to recopy the entire table data to the slave, as the triggers won't be in place yet.
--
Greg Sabino Mullane greg@xxxxxxxxxxxx
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 200912171153