Re: Determining size of a database before dumping

Jeff Davis <pgsql@xxxxxxxxxxx> writes:
> On Tue, 2006-10-03 at 00:42 +0200, Alexander Staubo wrote:
>> Why does pg_dump serialize data less efficiently than PostgreSQL when  
>> using the "custom" format?

> What you're saying is more theoretical. If pg_dump used specialized
> compression based on the data type of the columns, and everything was
> optimal, you're correct. There's no situation in which the dump *must*
> be bigger. However, since there is no practical demand for such
> compression, and it would be a lot of work ...

There are several reasons for not being overly fussy about the pg_dump
format:

* We don't have infinite manpower

* Cross-version and cross-platform portability of the dump files is
  critical

* The more complicated it is, the more chance for bugs, which you'd
  possibly not notice until you *really needed* that dump.

In practice, pushing the data through gzip gets most of the potential
win, for a very small fraction of the effort it would take to have a
smart custom compression mechanism.
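
As a concrete illustration (a minimal sketch; the database name "mydb"
and the output file names here are placeholders), the plain-format dump
can be piped through gzip by hand, while the custom format compresses
internally with zlib:

    # plain-format dump, compressed externally with gzip
    pg_dump mydb | gzip > mydb.sql.gz

    # custom-format dump with built-in zlib compression (level 6)
    pg_dump -Fc -Z 6 mydb > mydb.dump

    # restoring each one
    gunzip -c mydb.sql.gz | psql mydb
    pg_restore -d mydb mydb.dump

Since both paths rely on the same deflate algorithm, the resulting
file sizes are usually comparable; the custom format's main advantage
is selective restore via pg_restore, not better compression.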

			regards, tom lane

