On Mon, Sep 10, 2012 at 9:57 PM, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
> Jeff Janes <jeff.janes@xxxxxxxxx> writes:
>> On Mon, Sep 10, 2012 at 5:27 PM, Mike Christensen <mike@xxxxxxxxxxxxx> wrote:
>>> Is there something that can be done smarter with this error message?
>>>
>>> pg_dump: dumping contents of table pages
>>> pg_dump: [tar archiver] archive member too large for tar format
>>> pg_dump: *** aborted because of error
>
>> There is no efficient way for it to know for certain in advance how
>> much space the data will take, until it has seen the data. Perhaps it
>> could make an estimate, but that could suffer from both false
>> positives and false negatives.
>
> Maybe the docs should warn people away from tar format more vigorously.
> Unless you actually have a reason to disassemble the archive with tar,
> that format has no redeeming social value that I can see, and it
> definitely has gotchas. (This isn't the only one, IIRC.)

Gotcha. I ended up just using "plain" format, which worked well even
though the file was about 60 gigs and I had to clear out some hard disk
space first.

Is the TAR format just the raw SQL commands, tar'ed up and then sent
over the wire? It'd be cool if there were some compressed "binary"
backup of a database that could be easily downloaded, or even better,
a way to move an entire database between server instances in one go.
Maybe there's a tool that does that and I just don't know about it :)

Anyway, I'm all upgraded to 9.2. Decided I might as well, since I'm
launching my site in three weeks and won't get another chance to
upgrade for a while.

Mike
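
P.S. For anyone finding this in the archives later, here's roughly the
sort of thing I was imagining, sketched with pg_dump's "custom" format
and a straight pipe between servers. Database names, hostnames, and
file paths below are just placeholders, and the target database has to
exist before restoring into it:

    # Compressed single-file backup using pg_dump's custom format,
    # restorable selectively with pg_restore ("mydb" is a placeholder)
    pg_dump -Fc -f mydb.dump mydb

    # Restore that archive into a database on another server
    pg_restore -h otherhost -d mydb mydb.dump

    # Or skip the intermediate file entirely and move a database
    # between servers in one go by piping pg_dump into psql
    pg_dump -h oldhost mydb | psql -h newhost mydb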