That's the thing that's kinda blowing my mind here when I look at that table:
db1=# select count(*) from _my_cluster.sl_log_1 ;
count
-------
6788
(1 row)
As far as my DB is concerned, there are only ~7,000 rows (on average) when I look in there (it does fluctuate; I've seen it go as high as around 12k, but then it's gone back down, so I know events are moving around in there).
So from what I can tell: from the disk point of view, there's ~11 GB of data; from the vacuum point of view, there are 309,318 rows; from the psql point of view, there are only around 7,000. Am I missing something? Unless there's something going on under the hood that I don't know about (more than likely), it seems like my sl_log_1 table is munged or somehow otherwise very screwed up. I fear that re-shuffling or dropping/recreating the index will mess it up further. Maybe when I take my production systems down for maintenance, I can wait until sl_log_1 clears out, and then just drop that table altogether (and re-create it, of course)?
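Before dropping anything, it might be worth checking where the ~11 GB actually lives. A diagnostic sketch (my own, not from this thread), assuming a PostgreSQL version new enough to have the pg_relation_size functions:

```sql
-- Heap size alone vs. heap plus indexes (and TOAST), human-readable.
SELECT pg_size_pretty(pg_relation_size('_my_cluster.sl_log_1'))       AS heap_size,
       pg_size_pretty(pg_total_relation_size('_my_cluster.sl_log_1')) AS total_size;
```

If total_size dwarfs heap_size, the bulk of the bloat is in the indexes rather than the table itself, which changes what maintenance is actually needed.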
Thanks!
/kurt
On Jun 19, 2007, at 5:33 PM, Tom Lane wrote:
Kurt Overberg <kurt@xxxxxxxxxxxxxxxxx> writes:
mydb # vacuum verbose _my_cluster.sl_log_1 ;
INFO: "sl_log_1": found 455001 removable, 309318 nonremovable row
versions in 13764 pages
DETAIL: 0 dead row versions cannot be removed yet.
Hmm. So you don't have a long-running-transactions problem (else that
DETAIL number would have been large). What you do have is a failure
to vacuum sl_log_1 on a regular basis (because there are so many
dead/removable rows). I suspect also some sort of Slony problem,
because AFAIK a properly operating Slony system shouldn't have that
many live rows in sl_log_1 either --- don't they all represent
as-yet-unpropagated events? I'm no Slony expert though. You probably
should ask about that on the Slony lists.
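For what it's worth, the VACUUM numbers themselves hint at where the space went. A back-of-envelope check (my own arithmetic, assuming the default 8 kB block size):

```python
# VACUUM VERBOSE reported 13764 pages for the sl_log_1 heap.
# With PostgreSQL's default 8 kB block size, that heap is only ~108 MB,
# so if ~11 GB sits on disk for the relation, the heap is a rounding
# error -- the rest would have to be index (and/or TOAST) bloat.
BLOCK_SIZE = 8192                    # bytes, PostgreSQL default

heap_pages = 13764
heap_bytes = heap_pages * BLOCK_SIZE
heap_mb = heap_bytes / 1024 ** 2
print(f"heap: {heap_mb:.0f} MB")     # roughly 108 MB

disk_bytes = 11 * 1024 ** 3          # the ~11 GB observed on disk
print(f"heap share of disk footprint: {heap_bytes / disk_bytes:.1%}")
```

That gap is consistent with Tom's point below that the indexes need separate attention.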
...I then checked the disk and those pages are still there.
Yes, regular VACUUM doesn't try very hard to shorten the disk file.
Would a VACUUM FULL take care of this?
It would, but it will take an unpleasantly long time with so many live
rows to reshuffle. I'd advise first working to see if you can get the
table down to a few live rows. Then a VACUUM FULL will be a snap.
Also, you might want to do REINDEX after VACUUM FULL to compress the
indexes --- VACUUM FULL isn't good at that.
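For the maintenance window, that sequence would look something like this (a sketch, assuming sl_log_1 has first drained to a few live rows; note VACUUM FULL takes an exclusive lock, so the table must be idle):

```sql
-- Once sl_log_1 has drained during the maintenance window:
VACUUM FULL _my_cluster.sl_log_1;    -- compact the heap (exclusive lock)
REINDEX TABLE _my_cluster.sl_log_1;  -- rebuild the bloated indexes
```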
regards, tom lane