On 3/10/19 9:07 PM, Julie Nishimura wrote:
Oh, this is great news! Yay, thanks!
Just be aware that -n is the short form of --schema:
https://www.postgresql.org/docs/10/app-pgdump.html
"
Note
When -n is specified, pg_dump makes no attempt to dump any other
database objects that the selected schema(s) might depend upon.
Therefore, there is no guarantee that the results of a specific-schema
dump can be successfully restored by themselves into a clean database.
"
So any relationships across schemas will not be preserved.
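For example (schema and database names below are just placeholders), if tables in one schema depend on objects in another, dumping the two schemas together should keep those objects in the same archive:

  pg_dump -Fc -n sales -n reference -f sales_plus_reference.dump mydb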
------------------------------------------------------------------------
*From:* Adrian Klaver <adrian.klaver@xxxxxxxxxxx>
*Sent:* Sunday, March 10, 2019 6:28 PM
*To:* Julie Nishimura; pgsql-general@xxxxxxxxxxxxxxxxxxxx
*Subject:* Re: migration of 100+ tables
On 3/10/19 5:53 PM, Julie Nishimura wrote:
Hello friends, I will need to migrate 500+ tables from one server (8.3)
to another (9.3). I cannot dump and load the entire database due to
storage limitations (because the source is > 20 TB, and the target is
about 1.5 TB).
I was thinking about using pg_dump with a customized -t flag, then using
pg_restore. The table names will be in a list, or I could dump their
names into a table. What would be your suggestions on how to do this more
efficiently?
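Something along these lines is what I had in mind (file, host, and database
names are only placeholders, and it assumes plain table names with no spaces
or special characters):

  # turn each line of tables.txt into a -t switch
  pg_dump -Fc $(sed 's/^/-t /' tables.txt) -f selected_tables.dump -h oldhost olddb
  pg_restore -h newhost -d newdb selected_tables.dump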
The sizes you mention above, are they for the uncompressed raw data?
Are the tables all in one schema or multiple?
Where I am going with this is pg_dump -Fc --schema.
See:
https://www.postgresql.org/docs/10/app-pgdump.html
Then pg_restore -l to get a TOC (Table of Contents).
Comment out the items you do not want in the TOC.
Then pg_restore --use-list.
See:
https://www.postgresql.org/docs/10/app-pgrestore.html
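A rough sketch of that sequence (schema, file, and database names are only
placeholders):

  pg_dump -Fc -n myschema -f myschema.dump olddb
  pg_restore -l myschema.dump > toc.list
  # edit toc.list; lines starting with ';' are treated as comments
  pg_restore --use-list=toc.list -d newdb myschema.dump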
Thank you for your ideas, it's great to have you around, guys!
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx