
Re: replicate or multi-master for 9.1 or 9.2


 



correction: because the single master was too BUSY

On 28 Sep 2012, at 7:48 PM, ac@xxxxxx wrote:

Hi Jon,

I have had a case similar to yours: one data center in Hong Kong and another in Tokyo, with a line between them. Here is my feedback:

1) We used multiple masters at first. From time to time we hit issues: syncing between the master servers took time, which caused obvious slowness across the entire database. It also took more support resources to monitor them, and the team became very tired.
2) We switched the config from multiple masters to a single master. Database performance improved, but we still saw DB slowness, mainly because the single master was too busy.
3) Finally we fixed the issue by a) modifying the application to route heavy read-only accesses to a local slave, reducing the load on the master; b) using the remote slave purely for remote backup; c) building an advanced cache system to further reduce database access.
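The read/write split in (a) can be sketched roughly like this. Server names are placeholders, and a real version would hold database connections rather than strings; this only shows the routing decision:

```python
# Sketch of the read/write split described in (a): route read-only
# statements to a local slave and everything else to the master.
# The server names below are hypothetical placeholders.

READ_ONLY_PREFIXES = ("select", "show", "explain")

def route(sql, master="master.beijing", slave="slave.local"):
    """Return the server a statement should run on, based on its first word."""
    first_word = sql.lstrip().split(None, 1)[0].lower()
    return slave if first_word in READ_ONLY_PREFIXES else master

print(route("SELECT * FROM orders"))       # reads go to the local slave
print(route("UPDATE orders SET paid = 1"))  # writes go to the master
```

A prefix check like this is deliberately crude; many setups instead route per-transaction, or let the application declare read-only intent explicitly.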

Regards
AC


On 28 Sep 2012, at 2:27 PM, Chris Travers wrote:



On Thu, Sep 27, 2012 at 9:37 PM, Jon Hancock <jhancock@xxxxxxxxxxxxxxx> wrote:
We have a new pg system on 9.1, just launched inside China.  We now know we may need to run a replica, with some writes to it outside China.  We would like some advice.  Here are the parameters:

1 - Our data center is in Beijing.  If we have a replica in a data center in California, we can expect the bandwidth between the Beijing and California servers to vary, and any connection between the two to break down occasionally.  How well does pg replication work over suboptimal connections like this?

How do you want things to work when the internet connection goes down? 

2 - Is multi-master an option, so we can allow some writes to the otherwise-slave California db?

Multi-master replication is inherently problematic.  It doesn't matter what system you are using: avoid it if you can.  The problem is that multi-master replication typically means "replicate the easy cases and let a programmer figure out what to do if anything looks a little weird."  I suppose it might work for some cases, but....
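The "anything looks a little weird" case is easy to demonstrate. A toy sketch, not tied to any particular replication system: two masters each accept a write to the same row while out of contact, then merge with a naive last-write-wins rule, and one legitimate update silently disappears:

```python
# Toy illustration of a multi-master write conflict. Each version of
# the row is a (timestamp, value) pair; the merge keeps only the
# newer one, discarding the other update entirely.

def last_write_wins(a, b):
    """Merge two (timestamp, value) versions of the same row."""
    return a if a[0] >= b[0] else b

beijing    = (100, "balance=50")   # written at t=100 in Beijing
california = (101, "balance=75")   # written at t=101 in California

merged = last_write_wins(beijing, california)
# Both writes were legitimate, but only one survives the merge;
# reconciling them correctly requires application-level knowledge.
```

This is exactly the part that gets punted to "a programmer figures it out": the database cannot know whether the right answer is one value, the other, or some combination of both.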

I actually think that some sort of loose coupling usually makes better sense than multi-master replication.  I recently wrote pg_message_queue to make it easier to implement loose coupling generally.  You could, for example, send xml docs back and forth, parse those and save them into your databases.  You can't guarantee the C part of the CAP theorem (you pick A and P there), but you can guarantee local data consistency on both sides.
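The loose-coupling idea above can be sketched in miniature. In-memory queues stand in for pg_message_queue (whose actual API is not shown here), and dicts stand in for the two databases; the point is the shape of the flow, publish a small XML doc on one side, parse and apply it on the other:

```python
# Sketch of loose coupling via message passing: each side enqueues
# small XML documents and the other side applies them locally.
# deque and dict are stand-ins for a real queue table and database.

import xml.etree.ElementTree as ET
from collections import deque

queue_to_california = deque()  # placeholder for the real message queue

def publish(queue, order_id, amount):
    """Serialize a change as a small XML doc and enqueue it."""
    doc = ET.Element("order", id=str(order_id), amount=str(amount))
    queue.append(ET.tostring(doc, encoding="unicode"))

def consume(queue, local_db):
    """Drain the queue, parsing each doc and applying it locally."""
    while queue:
        doc = ET.fromstring(queue.popleft())
        local_db[int(doc.get("id"))] = float(doc.get("amount"))

california_db = {}
publish(queue_to_california, 42, 19.99)
consume(queue_to_california, california_db)
# Each side stays internally consistent; the two databases only
# converge when messages are delivered (A and P, not C).
```

If the link goes down, messages simply accumulate in the queue and are applied when delivery resumes, which is the practical payoff over synchronous replication on a flaky long-haul link.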


3 - Would trying this on 9.2 be a better place to start?  I don't think there is any reason we couldn't migrate up at this point.

The one thing in 9.2 that changes in this area is cascading replication: if you have multiple servers on each continent, you only replicate the data once over each long-haul link.  I don't think that's applicable to your case, though.
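For reference, a cascading standby in 9.2 is configured like any other standby; it just also serves WAL to further standbys. A minimal sketch, with hypothetical hostnames:

```
# recovery.conf on the California standby (hostnames are hypothetical).
# In 9.2 a standby with max_wal_senders > 0 can itself feed further
# local standbys, so WAL crosses the Beijing-California link only once.
standby_mode = 'on'
primary_conninfo = 'host=master.beijing.example port=5432 user=replicator'

# postgresql.conf on the same standby:
#   hot_standby = on
#   max_wal_senders = 3
```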

Best Wishes,
Chris Travers


