Joao Ferreira gmail wrote:
On Mon, 2008-08-11 at 12:45 -0400, Greg Smith wrote:
On Mon, 11 Aug 2008, Joao Ferreira gmail wrote:
I'm finding it very strange that my pg takes 9 GB on disk but
pg_dumpall produces a 250 MB dump. 'VACUUM FULL' was executed
yesterday.
If you've been running VACUUM FULL, it's probably so-called "index bloat".
Try running the query at http://wiki.postgresql.org/wiki/Disk_Usage to
figure out where all your space has gone inside the database.
egbert=# SELECT nspname || '.' || relname AS "relation",
egbert-# pg_size_pretty(pg_relation_size(nspname || '.' || relname)) AS "size"
egbert-# FROM pg_class C
egbert-# LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
egbert-# WHERE nspname NOT IN ('pg_catalog', 'information_schema')
egbert-# AND nspname !~ '^pg_toast'
egbert-# AND pg_relation_size(nspname || '.' || relname)>0
egbert-# ORDER BY pg_relation_size(nspname || '.' || relname) DESC
egbert-# LIMIT 20;
             relation             |  size
----------------------------------+---------
 public.timeslots_strs_var_ts_key | 5643 MB   # this is the UNIQUE constraint's index
 public.timeslots                 | 2660 MB   # this is the only table
 public.timeslots_timestamp_index | 583 MB    # this is an index
 public.timeslots_var_index       | 314 MB    # this is an index
 public.timeslots_timeslot_index  | 275 MB    # this is an index
(5 rows)
So it seems that the UNIQUE constraint's index is taking up more space
than the data itself...
Still, I have 2660 MB of data, but the dump is about 10x smaller!
Any hints?
I would try running CLUSTER on the table. This will usually clean
things up and free disk space in both the table and the indexes.
It does require quite extensive locking, though, so it might not be an
option if you can't afford having the database unavailable for a few
(10-15) minutes.
--
Tommy Gildseth