On 20/07/2023 20:34, Scott Ribe wrote:
>> On Jul 20, 2023, at 11:05 AM, Jef Mortelle <jefmortelle@xxxxxxxxx> wrote:
>> so, yes, pg_upgrade starts a pg_dump session
> Only for the schema, which you can see in the output you posted.

=> The pg_restore of this pg_dump takes about 7 hours ... of which 99% is spent executing queries like: SELECT pg_catalog.lo_unlink('oid');
=> pg_dump --schema-only, after a RAM upgrade from 8GB to 64GB (otherwise the query against pg_largeobject ends in an out-of-memory error), runs in about 3-4 minutes.

> Good to know, but it would be weird to have millions of large objects in a 1TB database. (Then again, I found an old post about 3M large objects taking 5.5GB...)
> Try: time a run of that pg_dump command, then time a run of pg_restore of the schema-only dump.

=> pg_restore takes 7 hours, of which 99% is spent executing queries like: SELECT pg_catalog.lo_unlink('oid');
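
For reference, the timing runs look roughly like this; the database names and dump path are placeholders, and pg_upgrade's own internal pg_dump also passes --binary-upgrade, so the numbers will not match it exactly:

# schema-only dump of the source database (about 3-4 minutes here)
time pg_dump --schema-only -Fc -f /tmp/schema.dump mydb

# restore that schema-only dump into an empty target database (about 7 hours here,
# almost all of it spent in the per-large-object calls described above)
createdb mydb_new
time pg_restore -d mydb_new /tmp/schema.dump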

> use the link option on pg_upgrade

I used the link option in all my tests, and it still takes that long.
For some reason Postgres creates a new subdirectory for each PG version inside every tablespace (I use a separate tablespace for each database in my cluster), even when using the link option.
So after a few upgrades, it ends up in a real mess of directories?
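
To see where those per-version directories accumulate: each tablespace location gets one PG_<major>_<catalogversion> subdirectory per major version that has used it. The path in the ls below is only an example of what pg_tablespace_location() might return here:

-- run in psql: list tablespaces and their on-disk locations
SELECT spcname, pg_tablespace_location(oid) FROM pg_tablespace;

# then look inside one of the reported locations (example path);
# the old version's subdirectory can be removed once the old cluster is gone
ls -1 /srv/pg_tablespaces/mydb_ts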

> Searching on this subject turns up some posts about slow restore of large objects under much older versions of PG--not sure if any of it still applies.
> Finally, given the earlier confusion between text and large objects, your apparent belief that text columns correlated to large objects, and that text could hold more data than varchar, it's worth asking: do you actually need large objects at all? (Is this even under your control?)

The use of OID (large objects) depends on the vendor of the software. I can ask the vendor to change to another type ... but honestly I don't believe it will be changed in the near future.
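
To make that distinction concrete (table and column names below are made up, not from the vendor's schema): a text or varchar column keeps its data in the table itself, TOASTed when large and capped at roughly 1 GB per value, while a large object only leaves an oid in the table and stores its data in pg_largeobject, where each object is a separate item for pg_dump/pg_restore to handle.

-- data stored in the table (TOASTed when big), roughly 1 GB limit per value
CREATE TABLE documents_inline (id bigint PRIMARY KEY, body text);

-- only a reference stored in the table; the bytes live in pg_largeobject,
-- one individually tracked object per document
CREATE TABLE documents_lo (id bigint PRIMARY KEY, body_oid oid);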

Database is 95GB, so not that big ;-) but it has ~25 million large objects in it.
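
For what it's worth, the count and the on-disk footprint of the large objects can be checked with something like:

SELECT count(*) FROM pg_largeobject_metadata;                      -- one row per large object
SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));   -- size of the large-object data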