
Re: pg_dump out of memory


 



On 07/04/2018 12:31 AM, David Rowley wrote:
On 4 July 2018 at 14:43, Andy Colson <andy@xxxxxxxxxxxxxxx> wrote:
I moved a physical box to a VM, and set its memory to 1Gig.  Everything
runs fine except one backup:


/pub/backup# pg_dump -Fc -U postgres -f wildfire.backup wildfire

pg_dump: Dumping the contents of table "ofrrds" failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR:  out of memory
DETAIL:  Failed on request of size 1073741823.
pg_dump: The command was: COPY public.ofrrds (id, updateddate, bytes) TO stdout;

There will be less memory pressure on the server if the pg_dump was
performed from another host. When running pg_dump locally the 290MB
bytea value will be allocated in both the backend process pg_dump is
using and pg_dump itself. Running the backup remotely won't require
the latter to be allocated on the server.
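As a rough sketch (the host name here is only a placeholder, not something from the report), the same dump run from another machine would look like:

pg_dump -h db-server -Fc -U postgres -f wildfire.backup wildfire

Only the server-side backend then has to hold the large value in memory; pg_dump's copy of it is allocated on the machine running the dump.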

I've been reducing my memory settings:

maintenance_work_mem = 80MB
work_mem = 5MB
shared_buffers = 200MB

You may also get it to work by reducing shared_buffers further.
work_mem won't have any effect, and neither will maintenance_work_mem.
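As an illustration only (the value below is an example, not a recommendation from this thread), a further reduction is just another edit to postgresql.conf followed by a restart, since shared_buffers only takes effect at server start:

shared_buffers = 100MB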

Failing that, the suggestions of more RAM and/or swap look good.
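If swap is the route taken, on a typical Linux VM a swap file can be added roughly like this (size and path are placeholders):

fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

Add a matching entry to /etc/fstab if it should survive a reboot.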


Adding more RAM to the VM is the simplest option.  It just seems a waste because of one backup.

Thanks all.

-Andy



