On 6/2/23 18:06, Ron wrote:
On 6/2/23 19:58, Adrian Klaver wrote:
On 6/2/23 17:44, Ron wrote:
Ran into this when upgrading from 13.11 to 15.3...
The pg_restore phase failed with "ERROR: out of shared memory", and
recommended that I increase max_locks_per_transaction. Doing so let
the process run to completion.
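For context, the error's hint points at the lock table in shared memory: restoring a schema with very many objects takes a lock per object, and the lock table holds roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) entries. A sketch of the change (the value 256 is an assumption; tune it to the schema):

```
# postgresql.conf -- default is 64; changing it requires a server restart
max_locks_per_transaction = 256
```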
It took 12.5 minutes to upgrade a 13GB instance. Soon after, I
upgraded a 78GB cluster, and it only took 3.1 minutes.
Where/how did you measure those sizes?
Does it really matter?
START_SECS=$(date +"%s")
pg_upgrade ...
FINISH_SECS=$(date +"%s")
ET=$(echo "scale=2; (${FINISH_SECS} - ${START_SECS})/60" | bc)
date +"%F %T pg_upgrade finished. Elapsed time: ${ET} minutes."
Unless I'm mistaken, the above is how the elapsed time was measured.
I was looking for the procedure for determining the size.
(Text copied between air-gapped computers, so there might be errors.)
(Both are VMs (same number of CPUs and RAM) connected to the same SAN.)
A "pg_dump --schema-only" of the two systems shows that the
small-but-slow schema is 5.9M lines.
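A sketch of how such a count can be made (`mydb` and `schema.sql` are placeholders; pg_dump needs a running server, so the call is guarded):

```shell
# Dump only the DDL, no data, then count the lines:
if command -v pg_dump >/dev/null 2>&1; then
  pg_dump --schema-only -d mydb -f schema.sql
fi

# Once the dump exists, see what dominates it:
if [ -f schema.sql ]; then
  wc -l < schema.sql              # total line count
  grep -c '^CREATE' schema.sql    # rough count of CREATE statements
fi
```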
Anything special you are doing in this cluster to create all those lines?
I do nothing; the schema is provided by the vendor.
Alright, so it is not your design, but you do have an idea of what is in
the database, correct?
What is the line count for the other instance?
227K lines.
Is this to be expected of such a huge schema?
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx