Tom Lane writes:
What's more, because the line and field buffers are StringInfos that are intended for reuse across multiple lines/fields, they're not simply made equal to the exact size of the big field. They're rounded up to the next power-of-2, ie, if you've read an 84MB field during the current COPY IN then they'll be 128MB apiece. In short, COPY is going to need 508MB of process-local RAM to handle this row.
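For illustration, a rough Python sketch of the power-of-two rounding described above; the 1KB starting size is an assumption modelled on StringInfo's initial allocation, not an exact reproduction of the server code:

    # Rough sketch (not PostgreSQL source) of how a StringInfo-style buffer grows:
    # start from a small allocation and double until the requested size fits,
    # so an 84MB field ends up living in a 128MB buffer.
    def grown_buffer_size(needed, initial=1024):
        size = initial
        while size < needed:
            size *= 2
        return size

    field_bytes = 84 * 1024 * 1024
    print(grown_buffer_size(field_bytes) // (1024 * 1024))  # -> 128 (MB)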
Of shared memory? I am a little confused; yesterday you said that increasing shared_buffers may be counterproductive. Or are you referring to the OS limit?
The OS limit is 1.6GB, but today I am going to try increasing kern.maxssiz; Vivek recommended increasing it.
In short, you need a bigger per-process memory allowance.
I wrote a small Python program to COPY one of the records that is failing. The client program is using 475MB, with 429MB resident. The server has been running all night on this single insert. The server process is using 977MB, with 491MB resident. Yesterday I saw it grow as large as 1000MB, with 900MB+ resident.
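For context, a minimal sketch of that kind of test client, assuming psycopg2; the connection string, table name, and file name are placeholders, not the actual script:

    # Minimal sketch of a client that COPYs a single oversized row into the server.
    import psycopg2

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()
    # failing_row.txt holds the single tab-delimited COPY line containing the huge field
    with open("failing_row.txt", "r") as f:
        cur.copy_expert("COPY big_table FROM STDIN", f)
    conn.commit()
    conn.close()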
BTW: I think if you were using different client and server encodings there would be yet a sixth large buffer involved, for the output of pg_client_to_server.
Using the default encoding. The log file keeps growing with similar messages; I put a subset of the log at http://public.natserv.net/postgresql-2007-06-19.txt. However, the process doesn't crash.