On 01-09-11 14:43, Scott Marlowe wrote:
On Thu, Sep 1, 2011 at 6:38 AM, Rik Bellens<rik.bellens@xxxxxxxxxxxxxx> wrote:
On 01-09-11 14:22, Scott Marlowe wrote:
Yeah, could be. Take a look at this page:
http://wiki.postgresql.org/wiki/Show_database_bloat and see if the
query there sheds some light on your situation.
Thanks for this answer.
If I run the query, I get 12433752064 wasted bytes on stats_count_pkey, so I
suppose that is the reason.
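For a quick before/after comparison, the on-disk sizes can also be read directly from the catalog. A minimal sketch; 'stats_count' is an assumed table name inferred from the index name stats_count_pkey, which is all the thread gives us:

```sql
-- Report the current on-disk size of the bloated index and its table
-- (table name 'stats_count' is assumed from the index name).
SELECT pg_size_pretty(pg_relation_size('stats_count_pkey')) AS index_size,
       pg_size_pretty(pg_relation_size('stats_count'))      AS table_size;
```

This doesn't estimate bloat the way the wiki query does, but rerunning it after a REINDEX or VACUUM shows how much space was reclaimed.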
Also look into installing something like nagios and the
check_postgresql.pl plugin to keep track of these things before they
get out of hand.
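The script Scott mentions is distributed as check_postgres.pl and has a bloat action that can be run standalone or from Nagios. A hedged sketch; the connection parameters and thresholds here are placeholders to adjust for your setup:

```shell
# Warn when estimated bloat in any table/index of 'mydb' exceeds the
# thresholds (values are placeholders, not recommendations).
check_postgres.pl --action=bloat --db=mydb \
    --warning='100 MB' --critical='1 GB'
```

Wired into Nagios, this alerts before an index grows to the multi-gigabyte sizes seen in this thread.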
csb time: Back in the day when pg 6.5.3 and 7.0 were new and
interesting, I had a table that was 80k or so, and an index that was
about 100M. Back when dual core machines were servers, and 1G of RAM was
an extravagance. I had a process that deleted everything from the
table each night and replaced it, and the index was so huge that
lookups were taking something like 10 seconds each. A simple drop /
create index fixed it right up. The check_postgresql.pl script is a
godsend of a tool for keeping your db healthy and happy.
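The drop/create fix from the story looks roughly like the sketch below; the index and table names are hypothetical. On modern PostgreSQL, REINDEX achieves the same result in one step:

```sql
-- Rebuild a bloated index by dropping and recreating it
-- (names are illustrative, not from the thread).
DROP INDEX bloated_idx;
CREATE INDEX bloated_idx ON some_table (some_column);

-- Equivalent single statement; since PostgreSQL 12 the CONCURRENTLY
-- form rebuilds without blocking writes:
-- REINDEX INDEX CONCURRENTLY bloated_idx;
```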
After running REINDEX on stats_count_pkey, the disk size of this
index was reduced by about 2 GB, but the size of the table itself
was still very large.
Running VACUUM FULL also reduced the table size from 14 GB to about
2 GB.
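The two maintenance steps described above, sketched as statements (again assuming the table is named stats_count). Note that VACUUM FULL takes an exclusive lock and rewrites the whole table, so it is best scheduled off-peak:

```sql
-- Rebuild the bloated primary-key index in place.
REINDEX INDEX stats_count_pkey;

-- Rewrite the table to reclaim dead space; locks the table exclusively
-- for the duration ('stats_count' assumed from the index name).
VACUUM FULL stats_count;
```

For routine upkeep, a plain VACUUM (or autovacuum, tuned appropriately) avoids the exclusive lock; VACUUM FULL is the recovery tool for bloat that has already gotten out of hand, as here.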
I will now regularly check the database with the mentioned tools and
queries.
Thank you for the very useful tips.
--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general