
Re: Replicating databases

It doesn't sound to me like replication is the right answer to this 
problem... You are setting yourself up to defeat one of the major 
purposes of a database in a client-server system: centralized storage.

If you add up all the money you are going to spend managing multiple 
copies of the same database (maintenance, support, bandwidth) plus the 
cost of denying your end users access to "real-time" data, such as bad 
decisions made on stale data, I think you will agree that it could end 
up being VERY expensive in the long term.

The part of your plan where you intend to synchronize all of the databases 
overnight is still going to be a bottleneck.

A better alternative: put some money into upgrading your bandwidth, 
especially at the PostgreSQL server end, not necessarily at each location.

FWIW: I have a client with 472+ stores, each using a 56K (fractional T1) 
pipe to a central PostgreSQL server. They don't have any major 
performance problems that I am aware of. If they did, I can pretty much 
guarantee that distributing 472 copies of the database would never be 
considered a "solution" to improve performance.



"Carlos Benkendorf" <carlosbenkendorf@xxxxxxxxxxxx> wrote in message 
news:20051102120637.58061.qmail@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Hello,

Currently our company has a lot of small stores distributed around the 
country. In the current database configuration we have a central database, 
and all the small stores access it remotely.

All primary keys were designed with a column identifying the store each 
row belongs to. In other words, only the owning store can update a row; 
other stores can read it, but the system was designed so that a store 
cannot update information that does not belong to it.
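
For illustration, a rule like that can be enforced in PostgreSQL with a 
trigger; a minimal sketch follows, with hypothetical table, column, and 
setting names (the real schema surely differs), and on newer releases a 
row-level security policy could do the same job:

-- Hypothetical table: every row carries the id of the store that owns it.
CREATE TABLE stock (
    store_id  integer NOT NULL,
    item_id   integer NOT NULL,
    quantity  integer NOT NULL,
    PRIMARY KEY (store_id, item_id)
);

-- Each store's session identifies itself once, e.g.:
--   SET myapp.store_id = '42';
-- (older releases need custom_variable_classes for such settings)
CREATE FUNCTION enforce_store_owner() RETURNS trigger AS $$
BEGIN
    IF OLD.store_id <> current_setting('myapp.store_id')::integer THEN
        RAISE EXCEPTION 'store % may not modify rows of store %',
            current_setting('myapp.store_id'), OLD.store_id;
    END IF;
    IF TG_OP = 'DELETE' THEN
        RETURN OLD;   -- allow the delete to proceed
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER stock_owner_check
    BEFORE UPDATE OR DELETE ON stock
    FOR EACH ROW EXECUTE PROCEDURE enforce_store_owner();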

Performance is not good because the line that connects each store to the 
central database is sometimes overloaded. We're thinking of replicating 
the central database to each store. A store would be able to read all the 
information from its local database but should only update rows that 
belong to it.

When a store needs to read information about other stores, it does not 
have to be up to date; yesterday's snapshot is good enough.

During the night, all the local store databases would be consolidated 
into a single database and replicated back to the stores. In the morning, 
when a store opens, its local database would have up-to-date, 
consolidated data.

I would appreciate suggestions about the best way to implement such a 
solution.
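
For example, since every row is owned by exactly one store, the 
consolidation step could in principle be plain SQL: for each store, 
replace the central rows that store owns with the rows from that store's 
nightly upload. A rough sketch, assuming the day's data lands in a 
per-store staging table (all names hypothetical):

-- Refresh the central copy of store 42's rows from its nightly upload.
BEGIN;
DELETE FROM stock WHERE store_id = 42;
INSERT INTO stock (store_id, item_id, quantity)
    SELECT store_id, item_id, quantity
    FROM stock_upload_42;
COMMIT;

Pushing the consolidated result back out to every store each morning is 
the part that scales worst, which is the bottleneck the reply above 
points out.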

Slony-I? SQL scripts?
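
One caveat with Slony-I: it replicates master-to-slave only, so each 
store's replica of the central set would be read-only, and a store's own 
writes would still need a separate path back to the centre (for example, 
the nightly upload above). A minimal slonik sketch for subscribing one 
store as a read-only replica, with made-up cluster and connection names:

# Hypothetical slonik script: central office is node 1, one store is node 2.
cluster name = stores;

node 1 admin conninfo = 'dbname=stores host=hq.example.com user=slony';
node 2 admin conninfo = 'dbname=stores host=store42.example.com user=slony';

init cluster (id = 1, comment = 'central office');

create set (id = 1, origin = 1, comment = 'tables shared with the stores');
set add table (set id = 1, origin = 1, id = 1,
               fully qualified name = 'public.stock');

store node (id = 2, comment = 'store 42', event node = 1);
store path (server = 1, client = 2,
            conninfo = 'dbname=stores host=hq.example.com user=slony');
store path (server = 2, client = 1,
            conninfo = 'dbname=stores host=store42.example.com user=slony');

subscribe set (id = 1, provider = 1, receiver = 2, forward = no);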

Thanks in advance!

Benkendorf


