> On Jul 20, 2023, at 7:46 AM, Jef Mortelle <jefmortelle@xxxxxxxxx> wrote:
>
> So: not possible to have very little downtime if you have a database with a lot of rows containing text as datatype, as pg_upgrade needs 12hr for 24 million rows in pg_largeobject.

We need to get terminology straight, because at the moment your posts are very confusing. In PostgreSQL, large objects and text are not the same thing. Text is basically varchar without a specified length limit, stored in the table itself. A large object is a blob (but not what SQL calls a BLOB): it is kind of like a file stored outside the normal table mechanism, referenced by an oid, and it provides facilities for partial reads, etc. See https://www.postgresql.org/docs/15/largeobjects.html

There are a number of ways to wind up with every reference to a large object deleted while the orphaned large objects themselves are still sitting in the database. So the first thing you should do is run vacuumlo -n to find out whether you have orphaned large objects (rough commands at the end of this mail). If you do, clean those up, then see how long pg_upgrade takes.

Second, what's your hardware? I really don't see dump & restore of a 1TB database taking 6 hours.

> Already tried to use --link and --jobs, but you cannot omit the "select lo_unlink ...." for every row containing datatype text in your database that the pg_* program creates in the export/dump file.

Terminology again, or are you conflating two different issues? pg_upgrade --link does not create a dump file.
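For reference, a link-mode run looks roughly like this; all the paths and the job count are placeholders you would adjust for your own installation:

    # In-place upgrade: hard-link data files instead of copying them,
    # and run per-database steps in parallel. No dump file is written.
    pg_upgrade \
      --old-bindir  /path/to/old/bin \
      --new-bindir  /path/to/new/bin \
      --old-datadir /path/to/old/data \
      --new-datadir /path/to/new/data \
      --link --jobs 8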
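And for the orphaned-large-object check I mentioned above, something along these lines (database name and connection options are placeholders):

    # Dry run: report orphaned large objects without removing anything
    vacuumlo -n -v yourdb

    # Then actually remove the orphans
    vacuumlo -v yourdb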
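Finally, just to illustrate the text-vs-large-object distinction, a sketch (table name and the oid value are made up):

    # A text column lives in the table itself; a large object is
    # separate data in pg_largeobject, referenced from an oid column.
    psql -d yourdb -c "CREATE TABLE docs (body text, attachment oid);"

    # lo_create(0) makes a new large object and returns its oid;
    # lo_unlink(<oid>) is what actually deletes the large object.
    psql -d yourdb -c "SELECT lo_create(0);"
    psql -d yourdb -c "SELECT lo_unlink(16432);"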