On 11/10/18 2:46 PM, Ron wrote:
On 11/09/2018 05:49 PM, Jean-Marc Lessard wrote:
I am running PostgreSQL 9.6.5 on x86_64-pc-mingw64, compiled by
gcc.exe (Rev5, Built by MSYS2 project) 4.9.2, 64-bit
on Windows Server 2012 with 12 GB RAM
The dumped table is 0.5 TB, 17 million rows, and LOBs use about 99% of
the space.
pg_dump consumes the entire system memory and swap, then terminates
with an out-of-memory error.
Is it a bug or normal behavior?
If I do not include LOBs in the dump, it works fine.
Here is the dump output:
C:\Users\Administrator> pg_dump -h localhost -Fc -a -b -t signatures
-v > d:\postgresql\sig.dmp
pg_dump: last built-in OID is 16383
...
pg_dump: reading row security enabled for table "ibisl1.signatures"
pg_dump: reading policies for table "ibisl1.signatures"
pg_dump: reading large objects
pg_dump: reading dependency data
pg_dump: saving encoding = UTF8
pg_dump: saving standard_conforming_strings = on
out of memory
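(For reference, the run that succeeds without large objects would presumably
be the same command minus -b; this is a reconstruction, since the working
invocation was not shown, and the output filename here is made up:

  C:\Users\Administrator> pg_dump -h localhost -Fc -a -t signatures -v > d:\postgresql\sig_nolob.dmp

With -t given and -b omitted, pg_dump leaves large objects out of the dump.)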
This looks similar to the recent thread "Trouble Upgrading Postgres".
https://www.postgresql.org/message-id/flat/CAFw6%3DU2oz9rTF0qa0LFMg91bu%3Dhdisfu2-xXU1%3D%3DD7yBif%2B2uw%40mail.gmail.com
Specifically, this message:
https://www.postgresql.org/message-id/ce239c9c-68f2-43e6-a6b6-81c66d0f46e5%40manitou-mail.org
"The hex expansion performed by COPY must allocate twice that size,
plus the rest of the row, and if that resulting size is above 1GB, it
will error out with the message you mentioned upthread:
ERROR: invalid memory alloc request size <some value over 1 billion>.
So there's no way it can deal with the contents over 500MB, and the
ones just under that limit may also be problematic."
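To put numbers on that (mine, not from the thread): a single 600 MB bytea
rendered in hex by COPY needs two output bytes per input byte, plus the
leading \x:

  600 * 1024 * 1024 * 2 + 2 = 1,258,291,202 bytes

which exceeds the 1 GB (1,073,741,824 bytes) allocation limit and triggers
the "invalid memory alloc request size" error.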
I don't think this is the case. The above is an issue because of the maximum
length of a string that Postgres can process. LOs are different creatures:
https://www.postgresql.org/docs/11/lo-implementation.html
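As that page describes, large object data lives in pg_largeobject, split
into rows of at most 2 kB (LOBLKSIZE). A query along these lines (a sketch,
superuser access required, not tested against your data) would show which
large objects account for the space:

  SELECT loid, count(*) AS chunks, sum(octet_length(data)) AS bytes
  FROM pg_largeobject
  GROUP BY loid
  ORDER BY bytes DESC
  LIMIT 10;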
It would help to see the memory configuration values set for the cluster:
https://www.postgresql.org/docs/11/runtime-config-resource.html
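For example, from psql, something like:

  SELECT name, setting, unit
  FROM pg_settings
  WHERE name IN ('shared_buffers', 'work_mem',
                 'maintenance_work_mem', 'max_connections');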
And message 89b5b622-4c79-4c95-9ad4-b16d0d0daf9b@xxxxxxxxxxxxxxxx
"It's undoubtedly very annoying that a database can end up with
non-pg_dump'able contents, but it's not an easy problem to solve. Some
time ago, work was done to extend the 1GB limit but eventually it got
scratched. The thread in [1] discusses many details of the problem and
why the proposed solutions were mostly a band-aid. Basically, the specs
of COPY and other internal aspects of Postgres are from the 32-bit era
when putting the size of an entire CDROM in a single row/column was not
anticipated as a valid use case. It's still a narrow use case today and
applications that need to store big pieces of data like that should
slice them in chunks, a bit like in pg_largeobject, except in much
larger chunks, like 1MB.
[1] pg_dump / copy bugs with "big lines" ?
https://www.postgresql.org/message-id/1836813.YmyOrS99PX%40ronan.dunklau.fr
"
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx