At 07:43 25/09/2011, Reuven M. Lerner wrote:
Hi, everyone. Daniel Verite
<daniel@xxxxxxxxxxxxxxxx> wrote:
It would thus appear that there's a slight edge
for dumping bytea, but nothing
super-amazing. Deleting, however, is still
much faster with bytea than large objects.
The problem you have is with compression/decompression of large objects. If
you look at the sizes, you get 680 KB for large objects and 573 MB for bytea.
PostgreSQL needs to decompress the large objects before dumping them. Even
worse, if your dump is compressed, PostgreSQL decompresses each large object,
dumps it, and then recompresses it. For this test, switch off compression on
the large objects/TOAST (a sketch for the bytea side follows below).
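For the bytea side, here is a minimal sketch of how column compression can be
switched off, assuming a hypothetical table docs with a bytea column data
(names chosen only for illustration):

  -- EXTERNAL stores the value out of line without TOAST compression.
  -- This affects newly stored values; existing rows keep their current form.
  ALTER TABLE docs ALTER COLUMN data SET STORAGE EXTERNAL;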
In the long term, perhaps a request to the pgsql-hackers list to dump the
already compressed large objects directly. TOAST may be more difficult,
because it applies not only to obviously big columns but to any column whose
size is bigger than a threshold (I don't remember the exact value; around
1-2 KB). A quick way to check whether your values are actually compressed
is shown below.
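As a rough check, you can compare pg_column_size() (the on-disk, possibly
compressed size) with octet_length() (the full uncompressed length). The
table and column names here are again the hypothetical ones from above:

  -- If stored_bytes is much smaller than uncompressed_bytes,
  -- the values are being compressed by TOAST.
  SELECT id,
         octet_length(data)   AS uncompressed_bytes,
         pg_column_size(data) AS stored_bytes
  FROM   docs
  ORDER  BY id
  LIMIT  10;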
EFME
--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general