
Re: migration of 100+ tables


 



On 3/10/19 5:53 PM, Julie Nishimura wrote:
Hello friends, I will need to migrate 500+ tables from one server (8.3) to another (9.3). I cannot dump and load the entire database due to storage limitations (the source is > 20 TB and the target is about 1.5 TB).

I was thinking about using pg_dump with a customized -t flag, then using pg_restore. The table names would be in a list, or I could dump their names into a table. What would be your suggestions for doing this more efficiently?
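For illustration, the -t approach described above might look like the sketch below, assuming the table names sit one per line in a file tables.txt (the file name, host, and database names are placeholders, not from the thread):

    # build one -t flag per table name listed in tables.txt (placeholder name)
    # and write a custom-format archive; oldhost and sourcedb are assumptions
    pg_dump -Fc -h oldhost $(sed 's/^/-t /' tables.txt) -f selected_tables.dump sourcedb

Note the expansion relies on shell word splitting, so it only works for plain table names without spaces or quoting.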

The sizes you mention above, are they for the uncompressed raw data?

Are the tables all in one schema or multiple?

Where I am going with this is pg_dump -Fc --schema=<schema>.

See:
https://www.postgresql.org/docs/10/app-pgdump.html
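A hypothetical per-schema dump, assuming a schema named my_schema and a database named sourcedb (both placeholders):

    # dump one schema in custom format so pg_restore can filter it later
    pg_dump -Fc --schema=my_schema -f my_schema.dump sourcedb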

Then use pg_restore -l to get a TOC (table of contents).
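For example (archive and list file names are placeholders):

    # write the archive's table of contents to an editable list file
    pg_restore -l my_schema.dump > my_schema.list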

Comment out the items you do not want in the TOC.
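In the list file, prefixing an entry with a semicolon comments it out. A hypothetical excerpt (IDs and names invented):

    ;2398; 0 24580 TABLE DATA public big_table_to_skip postgres
    2399; 0 24590 TABLE DATA public table_to_keep postgres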

Then run pg_restore --use-list.
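Putting it together, again with placeholder names (targetdb is an assumption):

    # restore only the uncommented entries into the 9.3 database
    pg_restore -d targetdb --use-list=my_schema.list my_schema.dump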

See:

https://www.postgresql.org/docs/10/app-pgrestore.html


Thank you for your ideas; it is great to have you around, guys!




--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx



