Re: Troubles dumping a very large table.

600 MB as measured with octet_length() on the data column. If there is a better way to measure row/cell size, please let me know, because we suspected the >1 GB problem as well. We thought we were being conservative by moving the larger rows out of the table, but it seems we need to move out even more.

Thanks,
Ted
________________________________________
From: Tom Lane [tgl@xxxxxxxxxxxxx]
Sent: Wednesday, December 24, 2008 12:49 PM
To: Ted Allen
Cc: pgsql-performance@xxxxxxxxxxxxxx
Subject: Re:  Troubles dumping a very large table.

Ted Allen <tallen@xxxxxxxxxxxxxxxxxxxxx> writes:
> during the upgrade.  The trouble is, when I dump the largest table,
> which is 1.1 Tb with indexes, I keep getting the following error at the
> same point in the dump.

> pg_dump: SQL command failed
> pg_dump: Error message from server: ERROR:  invalid string enlargement
> request size 1
> pg_dump: The command was: COPY public.large_table (id, data) TO stdout;

> As you can see, the table is two columns, one column is an integer, and
> the other is bytea.   Each cell in the data column can be as large as
> 600mb (we had bigger rows as well but we thought they were the source of
> the trouble and moved them elsewhere to be dealt with separately.)

600mb measured how?  I have a feeling the problem is that the value
exceeds 1Gb when converted to text form...

                        regards, tom lane
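To make the text-conversion point concrete, here is a rough sketch of the size arithmetic (illustrative code, not the server's actual byteaout implementation; the exact expansion depends on server version and byte distribution). In the old "escape" output format, each non-printable byte of a bytea becomes a 4-character \ooo octal escape, so a 600 MB cell of mostly binary data can expand well past the 1 GB per-value string limit:

```python
def bytea_escape_len(data: bytes) -> int:
    """Length of `data` rendered in PostgreSQL's bytea 'escape' text
    form: printable ASCII passes through, a backslash doubles, and
    every other byte becomes a 4-character \\ooo octal escape."""
    total = 0
    for b in data:
        if b == 0x5C:              # backslash -> \\ (2 chars)
            total += 2
        elif 0x20 <= b <= 0x7E:    # printable ASCII is unchanged
            total += 1
        else:                      # control/high bytes -> \ooo (4 chars)
            total += 4
    return total

ONE_GB = 1 << 30
CELL = 600 * (1 << 20)       # a 600 MB cell, per octet_length()

# Worst case (all non-printable bytes) the text form is ~4x the raw
# size, i.e. ~2.4 GB -- comfortably past the 1 GB string limit.
worst_case_text = CELL * 4
```

So a cell can be "only" 600 MB in binary and still trip the limit once COPY renders it as text.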

-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

