Admins;

Planning to pg_upgrade some large (3TB) clusters using the hard-link method. Run time for the upgrade itself is around 5 minutes. Nice!! Origin version 8.4, destination version 9.1.

Unfortunately, the post-upgrade analyze of the entire cluster is going to take a minimum of 1.5 hours, running several threads to analyze all tables. This was measured in an R&D environment.

Got to thinking: what if we take a SAN snapshot a few hours prior to upgrade time, upgrade that... then analyze it and dump the stats table? The assumption is that the DB will not have churned very much between then and when the real upgrade is performed, and the stats saved from a few hours earlier might be good enough to go live with. Once the real upgrade is done, we'd populate its stats table using the saved stats.

I realize that there may be rows in the reloaded stats table corresponding to temporary objects that may not exist then, and we'd have to deal with those. I'd immediately begin analyzing the tables so as to have the best stats once this is complete, but being able to turn out a working production instance in about 10 minutes rather than 2 hours is the goal here.

Anyway, perhaps there are other good reasons I should *not* attempt this, but it turns out that the stats table can't be reloaded with its own dump, so this of course is a show-stopper:

psql:d:456: ERROR:  cannot accept a value of type anyarray
CONTEXT:  COPY pg_statistic, line 1, column stavalues1: "{_assets,_income,_liabilities,assets,income,liabilities}"

Setting allow_system_table_mods to 'on' is also required prior to attempting the import.

Has anyone else attempted anything similar? Any feedback is appreciated.

Thanks

--
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consulting@xxxxxxxxxxx
p: 312.241.7800

--
Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
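FWIW, the "several threads to analyze all tables" pass mentioned above is roughly this shape. This is only a sketch, not the poster's actual script: `run_sql` is a placeholder the caller must supply (e.g. a function executing one statement over its own psycopg2 connection), and the table list is hypothetical.

```python
# Sketch of a multi-threaded ANALYZE pass over a list of tables.
# run_sql is an assumed caller-supplied executor, one connection per call.
from concurrent.futures import ThreadPoolExecutor


def analyze_all(tables, run_sql, workers=8):
    """Issue ANALYZE for each (schema, table) pair across a thread pool.

    Feeding the largest tables first tends to even out the total run
    time, so one straggler table doesn't extend the tail on its own.
    """
    stmts = ['ANALYZE "{}"."{}";'.format(s, t) for s, t in tables]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() blocks until every statement has been executed
        list(pool.map(run_sql, stmts))
```

On 9.1 there is no built-in parallel option for this (vacuumdb grew `-j` only in later releases), hence driving it from outside with multiple connections.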