Thanks Tom. Next question (and sorry if this is an ignorant one)... how would I go about doing that?

- Chris

-----Original Message-----
From: Tom Lane [mailto:tgl@xxxxxxxxxxxxx]
Sent: Friday, August 21, 2009 11:07 AM
To: Chris Hopkins
Cc: pgsql-general@xxxxxxxxxxxxxx
Subject: Re: Out of memory on pg_dump

"Chris Hopkins" <chopkins@xxxxxxx> writes:
> 2009-08-19 22:35:42 ERROR: out of memory
> 2009-08-19 22:35:42 DETAIL: Failed on request of size 536870912.
> Is there an easy way to give pg_dump more memory?

That isn't pg_dump that's out of memory --- it's a backend-side message.
Unless you've got extremely wide fields in this table, I would bet on this
really being a corrupted-data situation --- that is, there's some datum in
the table whose length word has been corrupted into a very large value.
You can try to isolate and delete the corrupted row(s).

			regards, tom lane
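
[Editor's note: a minimal sketch of how isolating and deleting the corrupted
row(s) is usually done, not taken from the thread. "mytable", "id", the key
ranges, and the ctid value are placeholders; substitute the real table, its
primary key, and the physical address you eventually find.]

    -- Read the table in key ranges; the range that reproduces the
    -- out-of-memory error contains the corrupted tuple.  Keep halving it.
    SELECT * FROM mytable WHERE id BETWEEN 1      AND 500000;   -- ok
    SELECT * FROM mytable WHERE id BETWEEN 500001 AND 1000000;  -- fails: bisect here

    -- Once the range is small, fetch only ctid and the key; this usually
    -- succeeds even for the bad row, since the corrupted column is not read.
    SELECT ctid, id FROM mytable WHERE id BETWEEN 734000 AND 734100;

    -- Delete the corrupted tuple by its physical address, then re-run pg_dump.
    DELETE FROM mytable WHERE ctid = '(12345,6)';

Since deleting by ctid is irreversible, it is prudent to take a
filesystem-level copy of the data directory before removing anything.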