On Tue, Jul 28, 2015 at 10:35 PM, AI Rumman <rummandba@xxxxxxxxx> wrote:

> But what I read, in-place upgrade has smaller outage, compared to
> dump/restore.

Please keep in mind that while the direct outage (when the database is
shut down) is small, there will be a period right after the upgrade when
PostgreSQL has to rebuild its statistics, since pg_upgrade does not carry
the optimizer statistics over to the new cluster. Until they are rebuilt,
the planner cannot generate decent plans (there are simply no statistics
to use), so your application may run so slowly that it is not much of an
improvement over the time when the DB is shut down. The only way to find
out how long this takes in your environment and for your dataset is to
test.

> But so many articles on having bugs afterwards.
> Do you think it is a good idea to use pg_upgrade for critical database
> application?

Yes.

> Or any other tool should I consider? For example - slony?

I think that if you can afford the downtime, pg_upgrade is the easiest
solution. You can use Slony, Londiste, Bucardo or any other replication
solution, but they require a complex setup and restrict you to a subset
of the operations possible on the database (basically, only those that
can fire a trigger) until all of your data is replicated.

For really small databases (up to tens or hundreds of megabytes),
dump/restore can be faster than pg_upgrade. Make sure you use the
pg_dump/pg_restore from 9.4, not 8.4.

Kind regards,
--
Oleksii

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
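P.S. To shorten the window without statistics that is mentioned above: on
9.4 the rebuild can be done in stages, so the planner gets rough estimates
almost immediately and better ones later. A minimal sketch (pg_upgrade on
9.4 also generates an analyze_new_cluster.sh script that does essentially
this; connection options depend on your setup):

```shell
# Rebuild planner statistics for all databases in the new cluster,
# right after pg_upgrade finishes and the new server is started.
# --analyze-in-stages first collects minimal statistics quickly,
# then progressively refines them in two more passes.
vacuumdb --all --analyze-in-stages

# Simpler alternative for a single database (one full pass, so the
# planner has nothing to work with until it completes):
#   psql -d mydb -c 'ANALYZE;'
```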
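P.P.S. On the dump/restore route, "use the 9.4 binaries" means pointing the
new pg_dump at the old server. A sketch, assuming both clusters run on the
same host (the 9.4 binary path, the ports, and the database name are
illustrative; adjust them for your installation):

```shell
# Dump with the NEW (9.4) pg_dump, connected to the OLD (8.4) server:
# the newer pg_dump knows how to emit output the new server accepts.
# Here port 5432 is the old cluster; -Fc produces the custom format
# that pg_restore can load in parallel.
/usr/pgsql-9.4/bin/pg_dump -Fc -p 5432 mydb > mydb.dump

# Restore into the new (9.4) cluster, here on port 5433, using several
# parallel jobs to cut the restore time.
/usr/pgsql-9.4/bin/createdb -p 5433 mydb
/usr/pgsql-9.4/bin/pg_restore -p 5433 -d mydb -j 4 mydb.dump
```

Remember that the restored database also starts without statistics, so an
ANALYZE pass is needed here as well.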