Hi,
Many thanks for your answer.
So: it is not possible to have very little downtime if you have a database
with a lot of rows containing text as datatype, as pg_upgrade needs 12 hours
for 24 million rows in pg_largeobject.
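As far as I understand (a rough sketch only, assuming the old cluster still
listens on port 5431 as in the pg_upgrade command quoted below, and a
hypothetical database name "mydb"), pg_largeobject holds one row per data
chunk of roughly 2 kB, while pg_largeobject_metadata has one row per large
object, so comparing the two counts shows how many individual objects the
upgrade actually has to process:

psql -p 5431 -d mydb -c "SELECT count(*) FROM pg_largeobject_metadata;"  # one row per large object
psql -p 5431 -d mydb -c "SELECT count(*) FROM pg_largeobject;"           # one row per ~2 kB data chunk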
Testing now with pg_dumpall and pg_restore ...
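For what it is worth, this is roughly the dump/restore route I am trying (a
sketch only; the ports match the pg_upgrade command quoted below, and the
database name "mydb" and the job count are placeholders to adapt):

# plain-SQL route: pg_dumpall output is restored with psql
pg_dumpall -p 5431 > all.sql
psql -p 5432 -f all.sql postgres

# per-database route: directory-format pg_dump, restored with pg_restore -j
pg_dump    -p 5431 -Fd -j 4 -f mydb.dir mydb
createdb   -p 5432 mydb
pg_restore -p 5432 -j 4 -d mydb mydb.dir

Whether the parallel jobs actually help with millions of small large objects
is something I still have to measure.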
I think PostgreSQL should treat resolving this problem as a high priority.
I have to make a choice in the near future between Postgres and Oracle, and
that database would have a lot of columns with the text datatype.
The database would be about 1 TB.
It seems a little bit tricky/dangerous to me to use Postgres, given what it
takes just to upgrade to a newer version.
Kind regards.
On 20/07/2023 13:43, Ilya Kosmodemiansky wrote:
Hi Jef,
On Thu, Jul 20, 2023 at 1:23 PM Jef Mortelle <jefmortelle@xxxxxxxxx> wrote:
Looking at the dump file: many, many lines like SELECT
pg_catalog.lo_unlink('100000');
I have the same issue with /usr/lib/postgresql15/bin/pg_upgrade -v -p
5431 -P 5432 -k
What's going on?
pg_upgrade is known to be problematic with large objects.
Please take a look here to start with:
https://www.postgresql.org/message-id/20210309200819.GO2021%40telsasoft.com
Kind regards