Thanks, Tom, for the explanation. I assumed it was my ignorance of how the schema is handled that made this look like a problem that had already been solved and that I was simply missing something. I fully expected the "You're Doing It Wrong" part. That is out of my control, but not beyond my influence.

I suspect I know the answer to this but have to ask. Using a simplified example where there are 100K sets of 4 tables, each set representing the output of a single job, are there any shortcuts to upgrading that would circumvent exporting the entire schema? I'm sure a different DB design would be better, but that's not what I'm working with.

Thanks

________________________________________
From: Ron <ronljohnsonjr@xxxxxxxxx>
Sent: Saturday, April 6, 2019 4:57 PM
To: pgsql-general@xxxxxxxxxxxxxxxxxxxx
Subject: Re: pg_upgrade --jobs

On 4/6/19 6:50 PM, Tom Lane wrote:
> senor <frio_cervesa@xxxxxxxxxxx> writes:
> [snip]
> > The --link option to pg_upgrade would be so much more useful if it
> > weren't still bound to serially dumping the schemas of half a million
> > tables.
>
> To be perfectly blunt, if you've got a database with half a million
> tables, You're Doing It Wrong.

Heavy (really heavy) partitioning?

--
Angular momentum makes the world go 'round.
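For context, the in-place upgrade being discussed is invoked roughly as below. This is a sketch only; the paths and version numbers are hypothetical examples, not from the thread. The relevant limitation is that pg_upgrade's `--jobs` option parallelizes dump/restore work across databases (and file copying across tablespaces), so a single database containing all ~500K tables still has its schema dumped serially:

```shell
# Sketch of an in-place upgrade with hard links (hypothetical paths/versions).
# --link: hard-link data files from the old cluster instead of copying them.
# --jobs: run dump/restore steps in parallel, but only one process per
#         database, so one huge database gains nothing here.
pg_upgrade \
  --old-bindir=/usr/pgsql-9.6/bin \
  --new-bindir=/usr/pgsql-11/bin \
  --old-datadir=/var/lib/pgsql/9.6/data \
  --new-datadir=/var/lib/pgsql/11/data \
  --link \
  --jobs=8
```

If the jobs' table sets could be split across many databases rather than one, `--jobs` would apply; with everything in a single database, it cannot.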