Hi,
> Hi.
> I encountered a problem: I cannot upgrade my instance (14 -> 15) via pg_upgrade.
> The utility crashed with an out-of-memory error.
> After some research I found that this happens while the schema is being exported with pg_dump.
> Then I tried to dump the schema manually with the --binary-upgrade option and also got an out-of-memory error.
> Digging a little deeper, I discovered a rather large number of large objects in the database (pg_largeobject is 10GB and pg_largeobject_metadata is 1GB, ~31 million rows).
> I was able to reproduce the problem on a clean server simply by putting some random data into pg_largeobject_metadata:
> $insert into pg_largeobject_metadata (select i, 16390 from generate_series(107659, 34274365) as i);
> $pg_dump --binary-upgrade --format=custom -d mydb -s -f tmp.dmp
> After 1-2 minutes it runs out of memory (I tried on servers with 4 and 8 GB of RAM).
> Perhaps this is a bug? How can I perform the upgrade?
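To gauge how many large objects are involved before trying a workaround, the standard catalogs can be queried directly (plain psql; mydb stands for your database, as in the repro above):

$psql -d mydb -c "select count(*) from pg_largeobject_metadata;"
$psql -d mydb -c "select pg_size_pretty(pg_total_relation_size('pg_largeobject'));"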
A quick and dirty solution might be to add temporary swap space, e.g. as described here: https://en.euro-linux.com/blog/creating-a-swap-file-or-how-to-deal-with-a-temporary-memory-shortage/
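In short, the steps look roughly like this (run as root; the 16G size is only an example, size it to pg_dump's expected footprint):

$fallocate -l 16G /swapfile   # reserve the file; adjust the size as needed
$chmod 600 /swapfile          # swap files must not be readable by other users
$mkswap /swapfile             # write the swap signature
$swapon /swapfile             # enable it
# ... run pg_upgrade / pg_dump ...
$swapoff /swapfile && rm /swapfile   # remove the temporary swap afterwards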
best,
Anton
Thanks for the answers.
Increasing the RAM helped.
Earlier I estimated that processing 1 million rows of pg_largeobject_metadata with pg_dump takes about 750MB of memory (RSS as reported by ps, AlmaLinux 8), so the ~31 million rows here need on the order of 23GB.
But the running time is frustrating: the dump took about 40 minutes.
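For reference, both figures can be sampled with plain ps while the dump is running (procps syntax; RSS is reported in kilobytes):

$ps -C pg_dump -o pid,rss,etime,cmd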
I really hope this gets fixed in v17.
Wed, 14 Feb 2024 at 10:11, Dischner, Anton <Anton.Dischner@xxxxxxxxxxxxxxxxxxx>: