	Charles Martin wrote:

> >So where is the server located relative to the pg_dump client?
> >On the same machine?
> >If so is it a virtual machine e.g AWS?
> >Across a local or remote network?
> >
> I gave the command in a terminal session after SSHing to the server from
> the same network. It is not a virtual machine.

That means that pg_dump runs on the same machine as the server.
It's plausible that, with only 4GB of RAM, the table that fails to dump
has some very large rows that cannot be allocated, especially since both
the backend and pg_dump need to hold such a row in memory at the same time.

> pg_dump: The command was: COPY public.docfile (docfile_pkey,
> docfileoriginalname, ordernumber, versionnum, docfilecontents,
> docfilepath, docfileextension, enddatetime, endby, editnum, insby,
> insdatetime, modby, moddatetime, active, doc_fkey) TO stdout;

The "docfilecontents" column suggests that it might hold large contents.
If its type is bytea, it's going to be expanded to twice its size to
build the hex representation.

You may get a sense of how big the biggest row is, expressed as text,
with this query:

  SELECT max(length(docfile::text)) FROM public.docfile;

If it's big enough to plausibly cause the OOM condition, try running
pg_dump remotely through an SSH tunnel [1] (which your network setup
already allows, since you log in over SSH), so that pg_dump itself does
not use any memory on the server.

Also, if the machine doesn't have swap space, it might be that just
adding a few GB of swap would make the operation succeed.

[1] https://www.postgresql.org/docs/current/static/ssh-tunnels.html

Best regards,

-- 
 Daniel Vérité
 PostgreSQL-powered mailer: http://www.manitou-mail.org
 Twitter: @DanielVerite
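
P.S. A minimal sketch of the tunnel approach, assuming you can SSH to the
server as "user" at "server", that PostgreSQL listens on localhost:5432
there, and that the database is named "mydb" (all placeholder names to
adapt to your setup):

  # On the client machine: forward local port 5433 to the server's
  # PostgreSQL port through SSH (-N: no remote command, just the tunnel).
  ssh -N -L 5433:localhost:5432 user@server

  # In another terminal on the client: dump through the tunnel.
  # -Fc writes a compressed custom-format archive to dump.backup.
  pg_dump -h localhost -p 5433 -d mydb -Fc -f dump.backup

That way the COPY output is consumed on the client side, and the only
memory used on the server is the backend's.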