> On Jul 20, 2023, at 11:05 AM, Jef Mortelle <jefmortelle@xxxxxxxxx> wrote:
>
> so, yes pg_ugrade start a pg_dump session,

Only for the schema, which you can see in the output you posted.

> Server is a VM server, my VM has 64GB SuseSLES attached to a SAN with SSD disk (Hp3Par)

A VM plus a SAN can perform well, or it can introduce all sorts of issues: a busy neighbor, poor VM drivers, a SAN that is only fast for large sequential writes, etc.

> On Jul 20, 2023, at 11:22 AM, Ron <ronljohnsonjr@xxxxxxxxx> wrote:
>
> Note also that there's a known issue with pg_upgrade and millions of Large Objects (not bytea or text, but lo_* columns).

Good to know, but it would be unusual to have millions of large objects in a 1TB database. (Then again, I found an old post about 3M large objects taking 5.5GB...)

Try:

  - time a run of that pg_dump command, then time a run of pg_restore of the schema-only dump
  - time a file copy of the database to a location on the SAN--the purpose is not to produce a usable backup, but to check IO throughput
  - use the --link option on pg_upgrade

(Rough command sketches for these are at the end of this message.)

Searching on this subject turns up some posts about slow restores of large objects under much older versions of PG--I'm not sure whether any of it still applies.

Finally, given the earlier confusion between text and large objects--your apparent belief that text columns correlate to large objects and that text can hold more data than varchar--it's worth asking: do you actually need large objects at all? (Is this even under your control? A quick way to count them is also sketched below.)
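
A minimal sketch of those three checks--paths, database names, and version directories here are placeholders for your own setup, not values taken from your report:

  # 1. Time a schema-only dump, then time restoring it into a scratch database.
  time pg_dump --schema-only --format=custom --file=schema.dump mydb
  createdb mydb_schema_test
  time pg_restore --schema-only --dbname=mydb_schema_test schema.dump

  # 2. Time a raw file copy of the data directory to the SAN.  Not a usable
  #    backup; it only tells you what IO throughput you are actually getting.
  time cp -a /var/lib/pgsql/data /path/on/san/io_test

  # 3. Run pg_upgrade with hard links instead of copying every data file.
  pg_upgrade --link \
      --old-datadir /path/to/old/data --new-datadir /path/to/new/data \
      --old-bindir /path/to/old/bin --new-bindir /path/to/new/bin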
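
And to answer the large-object question concretely, a couple of catalog queries (run through psql here; "mydb" is again a placeholder) show how many large objects exist and how much space they occupy:

  # One row per large object in pg_largeobject_metadata.
  psql -d mydb -c "SELECT count(*) FROM pg_largeobject_metadata;"

  # Total size of the catalog that stores the large-object data itself.
  psql -d mydb -c "SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));"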