It might happen because of the type of data you have (binary images). Compression of binary files that are already compressed is notoriously poor, since there is little chance of the same byte sequences recurring. In other words, it is possible: during compression, additional bytes are added for checksums and other redundancy, so the output can actually grow. This would normally only be significant on small binary files, though.
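You can see the effect with any stream compressor. For example, with gzip (image.jpg here is just a stand-in for one of your stored images):

$ gzip -c image.jpg > image.jpg.gz    # recompress an already-compressed file
$ ls -l image.jpg image.jpg.gz        # the .gz copy is often the same size or slightly larger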
-----Original Message-----
From: pgsql-admin-owner@xxxxxxxxxxxxxx [mailto:pgsql-admin-owner@xxxxxxxxxxxxxx] On Behalf Of Nicola Mauri
Sent: Wednesday, June 21, 2006 10:30 AM
To: pgsql-admin@xxxxxxxxxxxxxx
Subject: [ADMIN] Dump size bigger than pgdata size?
[sorry if this was previously asked: list searches seem to be down]

I'm using pg_dump to take a full backup of my database, using a compressed format:

$ pg_dump -Fc my_db > /backup/my_db.dmp

It produces a 6 GB file whereas the pgdata uses only 5 GB of disk space:
$ ls -l /backup
-rw-r--r-- 6592715242 my_db.dmp
$ du -b /data
5372269196 /data
How could it be?
As far as I know, dumps should be smaller than the data files on the filesystem, since they do not store indexes, etc.
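For instance, to see how much of that space is indexes, something like this should work (pg_relation_size() and pg_size_pretty() are in core as of 8.1, if I remember correctly):

SELECT relname, relkind, pg_size_pretty(pg_relation_size(oid))
  FROM pg_class
 WHERE relkind IN ('r', 'i', 't')   -- tables, indexes, toast tables
 ORDER BY pg_relation_size(oid) DESC
 LIMIT 10;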
The database contains about one hundred thousand binary images, some of which may already be compressed. So I tried the --compress=0 option, but this produces a dump that does not fit on my disk (more than 11 GB).
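One way to measure the uncompressed size without needing the disk space might be to discard the dump and just count its bytes:

$ pg_dump -Fc --compress=0 my_db | wc -c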
I'm using postgres 8.1.2 on RHEL4.
So, what can I do to diagnose the problem?
Thanks in advance,
Nicola