Re: Dump/Reload pg_statistic to cut time from pg_upgrade?

Jerry Sievers <gsievers19@xxxxxxxxxxx> writes:
> Kevin Grittner <kgrittn@xxxxxxxxx> writes:
>> Jerry Sievers <gsievers19@xxxxxxxxxxx> wrote:
>>> Planning to pg_upgrade some large (3TB) clusters using the hard-link
>>> method.  Run time for the upgrade itself is around 5 minutes.
>>> Unfortunately, the post-upgrade analyze of the entire cluster is
>>> going to take a minimum of 1.5 hours running several threads to
>>> analyze all tables.  This was measured in an R&D environment.
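
[For context, the hard-link upgrade plus parallel-analyze workflow being
described looks roughly like this; the binary/data paths and the job
count are illustrative placeholders, not details from the report above:]

    # In-place upgrade using hard links, so no data files are copied
    # (all paths here are hypothetical)
    pg_upgrade --link \
        --old-bindir=/usr/pgsql-old/bin --new-bindir=/usr/pgsql-new/bin \
        --old-datadir=/data/pg-old      --new-datadir=/data/pg-new

    # Rebuild optimizer statistics afterwards; this is the slow step
    # at issue.  --jobs requires vacuumdb from 9.5 or later.
    vacuumdb --all --analyze-only --jobs=8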

At least for some combinations of source and destination server
versions, it seems like it ought to be possible for pg_upgrade to
just move the old cluster's pg_statistic tables over to the new
cluster, as though they were user data.  pg_upgrade takes pains to
preserve relation OIDs
and attnums, so the key values should be compatible.  Except in
releases where we've added physical columns to pg_statistic or made a
non-backward-compatible redefinition of statistics meanings, it seems
like this should Just Work.  In cases where it doesn't work, pg_dump
and reload of that table would not work either (even without the
anyarray problem).
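
[For concreteness, a minimal look at the key columns involved;
"some_table" is a hypothetical name, and note that pg_statistic is
readable only by superusers:]

    -- pg_statistic rows are keyed by relation OID (starelid) and
    -- attribute number (staattnum); pg_upgrade preserves both, so a
    -- row carried over from the old cluster still identifies the
    -- right column.  The stavaluesN columns, however, are of type
    -- anyarray, which is what makes an ordinary dump/reload of this
    -- table problematic.
    SELECT starelid::regclass AS relation,
           staattnum          AS column_number,
           stanullfrac, stawidth, stadistinct
    FROM   pg_statistic
    WHERE  starelid = 'some_table'::regclass;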

			regards, tom lane

