This is the syntax:
/usr/lib/postgresql15/bin/pg_upgrade -r -v -p 5431 -P 5432 -k -j 8
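In a fuller invocation, pg_upgrade is usually also told where the old and new binaries and data directories live (-b/-B and -d/-D); the paths below are assumptions for illustration, not taken from the thread:

```shell
# Sketch of a fuller pg_upgrade --link run; the directory paths are
# hypothetical placeholders -- adjust to the actual installation.
OLD_BIN=/usr/lib/postgresql14/bin     # old cluster binaries (assumed path)
NEW_BIN=/usr/lib/postgresql15/bin     # new cluster binaries
OLD_DATA=/var/lib/pgsql/14/data       # old data directory (assumed path)
NEW_DATA=/var/lib/pgsql/15/data       # new data directory

"$NEW_BIN"/pg_upgrade \
  -b "$OLD_BIN" -B "$NEW_BIN" \
  -d "$OLD_DATA" -D "$NEW_DATA" \
  -p 5431 -P 5432 \
  -k \
  -j 8 \
  -r -v
# -k (--link) hard-links files instead of copying; -j runs 8 parallel
# jobs; -r retains the log files; -v is verbose.
```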
On 24/07/2023 14:52, Scott Ribe wrote:
On Jul 24, 2023, at 12:38 AM, Jef Mortelle <jefmortelle@xxxxxxxxx> wrote:
For some reason Postgres creates a new subdirectory for each PG version (I make use of tablespaces for each database in my PG cluster), even when using the link option.
So after a few upgrades, it ends up in a real mess of directories?
At the end of pg_upgrade, you can start up the old version against the old directory, or the new version against the new directory. (With --link, only until writing into the db, then you are committed to the running version.) Once you are comfortable that everything is good with the new version, you should delete the old data. Alternatively, if there is a problem forcing you back to the old version, you delete the new data.
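For the "delete the old data" step, pg_upgrade itself writes a delete_old_cluster.sh script into the directory it was run from — though note it skips generating that script when user tablespaces sit inside the old data directory, which is relevant here. A sketch (the rollback path is a hypothetical placeholder):

```shell
# After verifying the new cluster works, remove the old one using the
# script pg_upgrade generated in its working directory. It is NOT
# generated if old-cluster tablespaces live inside the old data dir.
./delete_old_cluster.sh

# Going back to the old version instead (only safe before writing to
# the new cluster when --link was used): stop the new cluster and
# remove its data directory -- this path is an assumed example.
# rm -rf /var/lib/pgsql/15/data
```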
=> pg_dump --schema-only, after a RAM upgrade from 8GB to 64GB (otherwise the query against pg_largeobject ends in an out-of-memory error), runs in about 3-4 minutes
=> pg_restore takes 7 hours, 99% of which is spent executing queries like: SELECT pg_catalog.lo_unlink('oid');
Given the tests you've run, it seems to me that it is doing something which it ought not when using --link.
Database is 95GB, so not so big ;-) but it has ~25 million large objects in it.
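For anyone wanting to check how many per-object calls a restore like this will issue, large objects can be counted from the catalog (one row per large object in pg_largeobject_metadata); "mydb" below is a placeholder database name:

```shell
# Count large objects in the database; each one becomes a separate
# lo_unlink()/lo_create() call during a dump/restore cycle.
psql -d mydb -Atc 'SELECT count(*) FROM pg_largeobject_metadata;'
```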
I suppose the use of large objects here is an artifact of support for other databases which have much lower limits on varchar column length.