On 25/03/2014 13:56, Frank Foerster wrote:
> Hi,
>
> we are currently in the process of upgrading a production/live 1 TB
> database from 9.2 to 9.3 via pg_dump, which is quite a lengthy process.
>
> Fortunately we have a capable spare server, so we can restore into a
> clean, freshly set up machine.
>
> I just wondered whether the intermediate step of writing the dump file
> and re-reading it to have it written to the database is really
> necessary. Is there any way to "pipe" the dump file directly into the
> new database process, or would such functionality make sense?

Surely:

    pg_dump [...etc...] | psql [...etc...]

(There's a fleshed-out sketch of this at the end of this message.) Though
I'm sure it will still take a long time for a database of that size.

Another option to explore would be to use Slony, which can replicate
databases between different Postgres versions - one of its design
use-cases is to perform upgrades like this with a minimum of down-time.
You can replicate the database over to the new server, and then the
switchover need take only seconds once the new one is ready.

Ray.

--
Raymond O'Donnell :: Galway :: Ireland
rod@xxxxxx
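
A minimal sketch of the pipe, assuming hypothetical host names (old-server,
new-server), a database name (proddb) and a user (postgres) - adjust these
to your environment. Nothing is written to disk in between, and PostgreSQL's
documentation recommends using the newer version's pg_dump (here 9.3's) to
dump the old cluster:

    # Create the empty target database on the new 9.3 server first
    # (hypothetical host/database/user names throughout).
    createdb -h new-server -U postgres proddb

    # Stream the dump straight from the old 9.2 server into the new one;
    # no intermediate dump file is ever written.
    pg_dump -h old-server -U postgres proddb | psql -h new-server -U postgres -d proddb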