Hey All,
I previously posted about the trouble I was having dumping a >1 TB
(size with indexes) table. The rows in the table can be very large.
Using Perl's DBD::Pg we were somehow able to insert these very large
rows without running into the >1 GB row bug. With everyone's help I
determined that I needed to move the largest rows elsewhere. That
seemed to solve the problem, but a new one has cropped up.
When I ran pg_dump again, it completed without error. Despite that,
it dumped less than half of the rows that actually exist in the
table. Examining the dump file (I did not dump in -F c format), the
COPY statement created by the dump is terminated correctly (with a
\.), but there are only 300+ million rows in the file rather than the
700+ million I was expecting. I don't believe I specified anything
that would have caused pg_dump to dump a truncated version of the
table. The last row successfully dumped contains only normal ASCII
characters and is not particularly large. The row immediately after
it contains an installer file (.bin) stored in a bytea column and is
about 138 MB in size.
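In case it matters, the largest bytea rows can be pulled up with
something along these lines (id and file_data here are placeholders
for our real key and bytea column names):

# id and file_data are placeholder column names, not our real schema
/var/lib/pgsql-8.3.5/bin/psql mydb -c "SELECT id, octet_length(file_data) AS bytes FROM large_table ORDER BY octet_length(file_data) DESC LIMIT 5"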
I've also been having trouble reproducing this on a smaller DB.
We are dumping the table with the following command:
/var/lib/pgsql-8.3.5/bin/pg_dump -O -x -t large_table mydb | gzip -c
> large_table.pgsql.gz
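One sanity check I can think of (assuming a plain COPY exercises the
same path pg_dump uses) is to compare a raw COPY row count against
count(*). In text-format COPY output each row is exactly one line
(embedded newlines are escaped), so wc -l should count rows:

# rows as COPY emits them vs. rows the table actually holds
/var/lib/pgsql-8.3.5/bin/psql mydb -c "COPY large_table TO STDOUT" | wc -l
/var/lib/pgsql-8.3.5/bin/psql mydb -t -c "SELECT count(*) FROM large_table"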
The specs of the DB server are as follows:
Processors: 4x 2.4 GHz Opteron cores
Memory: 16 GB
Disks: 42x 146 GB 15K RPM SCSI disks
Thanks again for all your help,
Ted