
Re: Strategy for moving a large DB to another machine with least possible down-time

On 09/21/2014 06:36 AM, Andreas Joseph Krogh wrote:
Hi all.
PG-version: 9.3.5
I have a DB large enough that it is impractical to pg_dump/restore it (it would require too much down-time for the customer). Note that I'm not able to move the whole cluster, only *one* DB in that cluster.
What is the best way to perform such a move? Can I use PITR, rsync + WAL-replay magic, or something else?
Can Barman help with this, maybe?
Thanks.
--
*Andreas Joseph Krogh*
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andreas@xxxxxxxxxx
www.visena.com

I had a smaller (though still big-ish) table I wanted to move, but not everything else.  The table had a timestamp column that was close enough to unique.  I wrote a perl script that would dump 100K records at a time (ordered by the timestamp).  It would dump a batch, then disconnect and sleep for 30 seconds or so, which kept the load low.
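In rough sketch form the loop looked like this (my script was perl; this is the same idea in Python/psycopg2, and the table name "bigtable", the column names, and the connection strings are all made-up placeholders):

    import time
    import psycopg2

    BATCH = 100000
    last_ts = '-infinity'   # first batch starts from the beginning

    while True:
        # reconnect for every batch so we never hold a long-lived connection
        src = psycopg2.connect("dbname=olddb host=oldhost")
        dst = psycopg2.connect("dbname=newdb host=newhost")

        cur = src.cursor()
        cur.execute("SELECT id, ts, payload FROM bigtable"
                    " WHERE ts > %s ORDER BY ts LIMIT %s",
                    (last_ts, BATCH))
        rows = cur.fetchall()
        if not rows:
            src.close(); dst.close()
            break

        ins = dst.cursor()
        ins.executemany("INSERT INTO bigtable (id, ts, payload)"
                        " VALUES (%s, %s, %s)", rows)
        dst.commit()

        # ts is only "close enough" to unique, so advancing with > can in
        # principle skip rows that share the boundary timestamp
        last_ts = rows[-1][1]

        src.close()
        dst.close()
        time.sleep(30)   # back off between batches to keep load low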

It took a while, but once it caught up, I changed the script to take the max(timestamp) from the old DB and the new DB and copy only the missing rows.  That let me keep the two in sync until I was ready to switch over.
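The catch-up pass just compares the two high-water marks (again a sketch, same placeholder names as above):

    import psycopg2

    src = psycopg2.connect("dbname=olddb host=oldhost")
    dst = psycopg2.connect("dbname=newdb host=newhost")

    # high-water mark on the new side; -infinity if the table is still empty
    cur = dst.cursor()
    cur.execute("SELECT coalesce(max(ts), '-infinity') FROM bigtable")
    last_ts = cur.fetchone()[0]

    # copy only the rows the new DB doesn't have yet
    cur = src.cursor()
    cur.execute("SELECT id, ts, payload FROM bigtable"
                " WHERE ts > %s ORDER BY ts", (last_ts,))
    rows = cur.fetchall()
    if rows:
        ins = dst.cursor()
        ins.executemany("INSERT INTO bigtable (id, ts, payload)"
                        " VALUES (%s, %s, %s)", rows)
        dst.commit()

    src.close()
    dst.close()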

-Andy

