
Re: Postgresql replication

It provides pseudo relief only if all your servers are in the same building. Having a front-end pgpool connector pointing at servers across the world is not workable -- performance ends up completely decrepit due to the high latency.

Which is the problem we face. Great, you've got multiple servers for failover. Too bad that doesn't do much good if your building gets hit by fire/earthquake/hurricane/etc.


Aly Dharshi wrote:
I know I am wading into this discussion as a beginner compared to the rest who have answered this thread, but doesn't something like pgpool provide relief via pseudo-multimaster replication? And wouldn't software suites like sqlrelay help to some extent? Looking forward to being enlightened.

Cheers,

Aly.

William Yu wrote:

Carlos Henrique Reimer wrote:

I read some documents about replication and realized that if you plan on using asynchronous replication, your application should be designed from the outset with that in mind because asynchronous replication is not something that can be easily “added on” after the fact.


Yes, it requires a lot of foresight to do multi-master replication -- especially across high-latency connections. I do that now for 2 different projects. We have servers across the country replicating data every X minutes, with custom app logic that resolves conflicting data.

Allocation of unique IDs that don't collide across servers is a must. For 1 project, instead of using numeric IDs, we use CHARs and prepend a unique server code, so record #1 on server A is A0000000001 versus B0000000001 on server B, and so on. For the other project, we were too far along in development to change all our numerics into chars, so we wrote custom sequence logic to divide our 10-billion ID space into 1 to X billion for server 1, X to Y billion for server 2, etc.
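A minimal sketch of both schemes in PostgreSQL (the sequence names, server codes, and range boundaries are illustrative assumptions, not the posters' actual setup):

    -- Scheme 1: CHAR IDs with a per-server prefix.
    -- Each server is configured with its own one-letter code ('A', 'B', ...).
    CREATE SEQUENCE record_seq;
    -- Server A would generate 'A0000000001', 'A0000000002', ...
    SELECT 'A' || lpad(nextval('record_seq')::text, 10, '0') AS record_id;

    -- Scheme 2: numeric IDs carved into non-overlapping per-server ranges.
    -- Run on server 1 only:
    CREATE SEQUENCE record_id_seq MINVALUE 1 MAXVALUE 2999999999 START 1;
    -- Run on server 2 only:
    CREATE SEQUENCE record_id_seq MINVALUE 3000000000 MAXVALUE 5999999999
        START 3000000000;
    -- A sequence that exhausts its MAXVALUE raises an error instead of
    -- silently colliding with another server's range.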

With this step taken, we then had to isolate (1) transactions that could run on any server without issue (where we always take the newest record), (2) transactions that required an amalgam of all actions, and (3) transactions that had to be limited to "home" servers.

Record-keeping stuff, where we keep a running history of all changes, fell into the first category. It would have been no different than 2 users on the same server updating the same object at different times during the day. Updating of summary data fell into category #2 and required parsing the change history of individual elements. Category #3 was financial transactions requiring strict locks; these were divided up by client/user space and restricted to the user's home server. This case would not allow auto-failover. Instead, it would require some prolonged threshold of downtime for a server before full financials are allowed on backup servers.
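For the first category, "always take the newest record" is essentially a last-write-wins merge on a row timestamp. A minimal sketch in PostgreSQL (the table, columns, and the ON CONFLICT syntax from later PostgreSQL releases are assumptions for illustration, not the posters' actual code):

    -- Hypothetical replicated table; updated_at drives conflict resolution.
    CREATE TABLE change_log (
        id         char(11) PRIMARY KEY,   -- e.g. 'A0000000001'
        payload    text,
        updated_at timestamptz NOT NULL
    );

    -- Applying an incoming row from another server: keep whichever
    -- version carries the newest timestamp (last-write-wins).
    INSERT INTO change_log (id, payload, updated_at)
    VALUES ('A0000000001', 'replicated payload', now())
    ON CONFLICT (id) DO UPDATE
        SET payload    = EXCLUDED.payload,
            updated_at = EXCLUDED.updated_at
        WHERE change_log.updated_at < EXCLUDED.updated_at;

Category #2 and #3 transactions can't be merged this way; they need the application-level amalgamation and home-server restrictions described above.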
