Re: Fastest way to duplicate a quite large database


 



On 04/13/2016 06:58 AM, Edson Richter wrote:



Another problem I've found: I used "pg_dump" and "pg_restore" to
create the new CustomerTest database in my cluster. Replication
immediately started copying the 60 GB of data to the slave, causing big
trouble.
Does marking it as a "template" avoid replication of that copied database?
How can I mark a database as "do not replicate"?

With the Postgres built-in binary replication you can't; it replicates the entire cluster. There are third-party solutions that offer that choice:

http://www.postgresql.org/docs/9.5/interactive/different-replication-solutions.html

Table 25-1. High Availability, Load Balancing, and Replication Feature Matrix


As has been mentioned before, running a non-production database on the same cluster as the production database is generally not a good idea. Per previous suggestions, I would host your CustomerTest database on another instance/cluster of Postgres listening on a different port. Then all your customers have to do is create a connection that points at the new port.
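Roughly, that setup could look like the sketch below. The data directory path, the port 5433, the database name CustomerTest, and the dump filename are all assumptions for illustration; adjust them to your environment:

```shell
# Create a second, independent cluster for test data
# (path is an assumption -- use whatever fits your layout)
initdb -D /var/lib/postgresql/test_cluster

# Start it on a different port so it is fully separate from
# the production cluster listening on the default 5432
pg_ctl -D /var/lib/postgresql/test_cluster -o "-p 5433" start

# Load the dump into the test instance; the production cluster's
# binary replication never sees this data
createdb -p 5433 CustomerTest
pg_restore -p 5433 -d CustomerTest customer.dump

# Clients only need to change the port in their connection settings
psql -p 5433 -d CustomerTest
```

Because the test cluster has its own data directory and postmaster, nothing loaded into it can end up on the production standby.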


Thanks,

Edson




--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx


--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


