Re: problems with large objects dump

Sergio Gabriel Rodriguez <sgrodriguez@xxxxxxxxx> writes:
> Our production database, Postgres 8.4, is about 200 GB in size, and
> most of the data is large objects (174 GB).  Until a few months ago we
> used pg_dump to perform backups, and the whole process took about 3-4
> hours.  Some time ago the process became interminable, taking one or
> two days; we noticed that it slows down considerably when it starts
> backing up the large objects, so we had to switch to physical backups.
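
(As a point of reference, a quick way to confirm how much of a database
is large-object data is to look at the pg_largeobject catalog, which
holds all large objects; a minimal sketch, with "mydb" as a placeholder
database name:

    # total on-disk size of large-object storage (table + TOAST + indexes)
    $ psql -d mydb -c "SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'))"
    # number of distinct large objects stored
    $ psql -d mydb -c "SELECT count(DISTINCT loid) FROM pg_largeobject"

The count matters here as much as the total size, since pg_dump does
some per-object work for each blob.)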

Hm ... there's been some recent work to reduce O(N^2) behaviors in
pg_dump when there are many objects to dump, but I'm not sure that's
relevant to your situation, because before 9.0 pg_dump didn't treat
blobs as full-fledged database objects.  You wouldn't happen to be
trying to use a 9.0 or later pg_dump, would you?  Exactly which 8.4.x
release is this, anyway?
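
(Both versions are easy to check; a minimal sketch, with "mydb" again a
placeholder:

    # version of the pg_dump client binary actually being run
    $ pg_dump --version
    # version of the server being dumped
    $ psql -d mydb -c "SHOW server_version"

The distinction matters because pg_dump behaves according to the client
binary's version, not the server's.)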

			regards, tom lane



