Re: Troubles dumping a very large table.

I was hoping to use pg_dump rather than have to do a manual dump, but if the latest suggestion (moving rows over 300 MB elsewhere and dealing with them later) does not work, I'll try that.
Thanks everyone.
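
A minimal libpq sketch of that triage step might look like the following: it lists the rows whose bytea value already exceeds roughly 300 MB, the point at which the ~3X text-format expansion discussed below approaches the 1 GB limit. The table name bigtable, the columns id and data, and the environment-variable connection settings are illustrative assumptions, not the actual schema.

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("");     /* connection details taken from PG* environment variables */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* 300 MB of raw bytea is roughly where the ~3X text-format expansion
       starts to approach the 1 GB limit, so flag anything above that. */
    PGresult *res = PQexec(conn,
        "SELECT id, octet_length(data) AS bytes"
        "  FROM bigtable"
        " WHERE octet_length(data) > 300 * 1024 * 1024"
        " ORDER BY bytes DESC");
    if (PQresultStatus(res) == PGRES_TUPLES_OK) {
        for (int i = 0; i < PQntuples(res); i++)
            printf("id=%s  bytes=%s\n",
                   PQgetvalue(res, i, 0), PQgetvalue(res, i, 1));
    } else {
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
    }

    PQclear(res);
    PQfinish(conn);
    return 0;
}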

Merlin Moncure wrote:
On Fri, Dec 26, 2008 at 12:38 PM, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
Ted Allen <tallen@xxxxxxxxxxxxxxxxxxxxx> writes:
600 MB, measured with octet_length() on the data column. If there is a better way to measure the row/cell size, please let me know, because we thought it was the >1 GB problem too. We thought we were being conservative by getting rid of the larger rows, but I guess we need to get rid of even more.
Yeah, the average expansion of bytea data in COPY format is about 3X :-(
So you need to get the max row length down to around 300 MB.  I'm curious
how you got the data in to start with --- were the values assembled on
the server side?

Wouldn't binary style COPY be more forgiving in this regard?  (if so,
the OP might have better luck running COPY BINARY)...
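
A rough libpq sketch of that suggestion, reusing the same hypothetical bigtable and writing the stream to a local file named bigtable.copy, might look like this; binary COPY sends the bytea bytes as-is rather than text-escaping them, so a 600 MB value stays close to 600 MB in the dump.

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("");     /* connection details taken from PG* environment variables */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    FILE *out = fopen("bigtable.copy", "wb");   /* hypothetical output file */
    if (out == NULL) {
        perror("fopen");
        return 1;
    }

    /* Binary COPY ships the bytea bytes verbatim instead of text-escaping
       them, so each row stays close to its raw size in the dump stream. */
    PGresult *res = PQexec(conn, "COPY bigtable TO STDOUT BINARY");
    if (PQresultStatus(res) != PGRES_COPY_OUT) {
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
        return 1;
    }
    PQclear(res);

    char *buf;
    int   len;
    while ((len = PQgetCopyData(conn, &buf, 0)) > 0) {
        fwrite(buf, 1, (size_t) len, out);
        PQfreemem(buf);
    }
    if (len == -2)                      /* -1 is the normal end of the copy stream */
        fprintf(stderr, "copy error: %s", PQerrorMessage(conn));

    res = PQgetResult(conn);            /* collect the final COPY command status */
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "COPY did not finish cleanly: %s", PQerrorMessage(conn));

    PQclear(res);
    fclose(out);
    PQfinish(conn);
    return 0;
}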

This also goes for libpq traffic... large (>1 MB) bytea values definitely want
to be passed using the binary switch in the protocol.

merlin
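
For the protocol-level binary switch mentioned above, a sketch along the lines of the PQexecParams example in the libpq documentation could look like the following; the query text, the bigtable/id/data names, and the row id 42 are assumptions for illustration only.

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("");     /* connection details taken from PG* environment variables */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    const char *paramValues[1] = { "42" };      /* hypothetical row id */
    PGresult *res = PQexecParams(conn,
                                 "SELECT data FROM bigtable WHERE id = $1",
                                 1,             /* one parameter */
                                 NULL,          /* let the server infer its type */
                                 paramValues,
                                 NULL,          /* parameter lengths: not needed for text */
                                 NULL,          /* parameter formats: default text */
                                 1);            /* resultFormat = 1: return columns in binary */

    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1) {
        /* The value arrives as raw bytes: no text escaping on the wire
           and no unescape pass needed on the client. */
        int nbytes = PQgetlength(res, 0, 0);
        fwrite(PQgetvalue(res, 0, 0), 1, (size_t) nbytes, stdout);
        fprintf(stderr, "fetched %d bytes\n", nbytes);
    } else {
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
    }

    PQclear(res);
    PQfinish(conn);
    return 0;
}

Requesting binary results means the client receives the bytea payload at its true length, so neither side pays for escaping or unescaping a value this large.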


