> Martin Povolny <martin.povolny@xxxxxxxxx> writes:
>> I had 5 databases; 4 dumped OK, but the 5th, the largest, failed to dump:
>> I was unable to make a dump in the default 'tar' format. I got this message:
>> pg_dump: [tar archiver] archive member too large for tar format
>
> This is expected: tar format has a documented limit of 8GB per table.
> (BTW, tar is not the "default" nor the recommended format, in part
> because of that limitation. The custom format is preferred unless
> you really *need* to manipulate the dump files with "tar" for some
> reason.)
Ok, I get it. Don't use the 'tar' format. I will not.
As to hitting the limit of 8 GB per table -- I have one really large table.
But if I dump that table separately, I get:
pg_dump --verbose --host localhost --username bb --create --format tar --file archiv5-process.dump --table process archiv5
-rw-r--r-- 1 root root 4879763968 2010-10-27 10:15 archiv5-process.dump
In other words, I am sure I did not hit the 8 GB-per-table limit, but I am over 4 GB for that table.
The 'process' table is the largest and is also the one where the restore fails in both cases (tar format and custom format).
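For comparison, here is what the same single-table dump would look like in the recommended custom format (a sketch only -- it reuses the connection options and names from the tar-format command above and needs a live server to run):

```shell
# Sketch: dump the single large table in the custom (-F custom) format,
# which is not subject to tar's 8 GB per-member limit.
# Host, user, table, and database names mirror the tar command above.
pg_dump --verbose --host localhost --username bb \
        --format custom --file archiv5-process.dump \
        --table process archiv5
```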
>
>> for the bb.dump in the 'custom' format:
>> pg_restore: [vlastní archivář] unexpected end of file
>> (the bracketed part is the Czech localization of "custom archiver")
>
> Hm, that's weird. I can't think of any explanation other than the dump
> file somehow getting corrupted. Do you get sane-looking output if you
> run "pg_restore -l bb.dump"?
Sure, I ran pg_restore -l into a file and did not get any errors.
Then I commented out the entries that had already been restored and tried restoring the tables that come after the table 'process'.
But I got the same error message :-(
Like this:
$ /usr/lib/postgresql/8.4/bin/pg_restore -l bb.dump > bb.list
# then edit bb.list, commenting out lines before and including table 'process', saving into bb.list-post-process
$ /usr/lib/postgresql/8.4/bin/pg_restore --verbose --use-list bb.list-post-process bb.dump > bb-list-restore.sql
pg_restore: restoring data for table "process_internet"
pg_restore: [custom archiver] unexpected end of file
pg_restore: *** aborted because of error
As to splitting the dump, as suggested earlier in this thread: I am sure my system can work with files over 4 GB. Also, I don't understand how splitting the output from pg_dump would prevent pg_dump from failing. But I can try that too.
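In case it helps anyone following along, the splitting suggestion usually means piping pg_dump through split(1) and reassembling with cat before pg_restore. The pg_dump/pg_restore lines below are a hedged sketch (names are placeholders from this thread); the split/cat mechanics are demonstrated on a stand-in file so they can actually be run:

```shell
# Sketch: split a large custom-format dump into chunks and reassemble
# it for restore. "archiv5" and the chunk prefix are placeholders:
#
#   pg_dump -F custom archiv5 | split -b 1000m - archiv5.dump.part.
#   cat archiv5.dump.part.* | pg_restore -d archiv5_restored
#
# The same split/reassemble mechanics, shown on a stand-in file:
head -c 100000 /dev/urandom > /tmp/sample.dump           # stand-in for a dump
split -b 16k /tmp/sample.dump /tmp/sample.dump.part.     # split into chunks
cat /tmp/sample.dump.part.* > /tmp/sample.rejoined       # reassemble in order
cmp /tmp/sample.dump /tmp/sample.rejoined                # byte-identical check
```

Note that splitting only works around filesystem or transfer limits on a single file; it does not change what pg_dump itself writes.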
Also, I did not try the '-F plain' dump format.
I stopped using the plain format in the past because I was getting output as if I had used --inserts although I had not, and I don't see any pg_dump option that would force the use of COPY for dumping data. But that was several PostgreSQL versions back, and I have not tried it since.
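For what it's worth, COPY is pg_dump's default data format for plain dumps -- there is no flag to force it because --inserts (or --column-inserts) is what switches it off. A hedged sketch, reusing the placeholder names from this thread and assuming a live server:

```shell
# Sketch: plain-format dump; COPY is the default data representation,
# so no extra flag is needed (only --inserts would change that).
pg_dump --host localhost --username bb --format plain \
        --file archiv5-process.sql --table process archiv5
grep -m1 '^COPY ' archiv5-process.sql   # data section should start with COPY
```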
Many thanks for your time and tips!
--
Mgr. Martin Povolný, soLNet, s.r.o.,
+420777714458, martin.povolny@xxxxxxxxx