On Mon, Sep 10, 2012 at 5:27 PM, Mike Christensen <mike@xxxxxxxxxxxxx> wrote:
> Is there something that can be done smarter with this error message?
>
>
> pg_dump: dumping contents of table pages
> pg_dump: [tar archiver] archive member too large for tar format
> pg_dump: *** aborted because of error

Maybe it could tell you what the maximum allowed size is, for future reference.

>
> If there's any hard limits (like memory, or RAM) that can be checked
> before it spends two hours downloading the data,

There is no efficient way for it to know for certain in advance how much space the data will take until it has seen the data. Perhaps it could make an estimate, but that could suffer from both false positives and false negatives.

The docs for pg_dump do mention an 8GB limit for individual tables when using the tar format. I don't see how much more than that warning can reasonably be done.

It looks like it dumps an entire table to a temp file first, so I guess it could throw the error at the point the temp file exceeds that size, rather than waiting for the table to be completely dumped and then attempting to add it to the archive. But that would break modularity some, and you could still have dumped 300 7.5GB tables before getting to the 8.5GB one that causes the error.

Cheers,

Jeff

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
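For anyone who wants a rough pre-flight check before spending two hours on a dump, one option is to compare on-disk table sizes against the 8GB tar member limit mentioned above. This is only a heuristic, since the dumped text can be larger or smaller than the on-disk size (so it has the same false positive/negative problem Jeff describes). A minimal sketch in Python using psycopg2; the connection string and the exact threshold are assumptions to adjust for your environment:

    # Rough pre-flight check: flag tables whose on-disk size is near the
    # 8 GB tar member limit.  On-disk size is only a proxy for dumped size,
    # so this can produce both false positives and false negatives.
    import psycopg2

    TAR_MEMBER_LIMIT = 8 * 1024**3  # 8 GB tar member size limit

    # Connection string is an assumption; point it at your own database.
    conn = psycopg2.connect("dbname=mydb")
    cur = conn.cursor()
    cur.execute("""
        SELECT c.oid::regclass::text,
               pg_table_size(c.oid)      -- heap + TOAST, no indexes
        FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE c.relkind = 'r'
          AND n.nspname NOT IN ('pg_catalog', 'information_schema')
        ORDER BY 2 DESC
    """)
    for table, size in cur.fetchall():
        if size > TAR_MEMBER_LIMIT:
            print("%s: %.1f GB on disk; may exceed the tar member limit "
                  "when dumped" % (table, size / 1024.0**3))
    conn.close()

If any table is flagged, switching to the custom (-Fc) or directory (-Fd) output format avoids the tar member size restriction entirely.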