On Tue, 8 May 2007, Heikki Linnakangas wrote:
Pomarede Nicolas wrote:
On Tue, 8 May 2007, Heikki Linnakangas wrote:
Pomarede Nicolas wrote:
But for the data (dead rows), even running a VACUUM ANALYZE every day is
not enough, and it doesn't truncate the empty pages at the end of the table,
so the data size stays around 200-300 MB even though only a few live rows
are actually there.
For a table like that you should run VACUUM much more often than once a
day. Turn on autovacuum, or set up a cron script etc. to run it every 15
minutes or so.
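For example, a crontab entry along these lines (the database and table
names here are only placeholders) would vacuum the spool table every 15
minutes; alternatively, turn on autovacuum in postgresql.conf
(autovacuum = on) and let it pick the table up from its update/delete
activity:

    # vacuum the spool table every 15 minutes (names are examples only)
    */15 * * * *  psql -d mydb -c "VACUUM ANALYZE spool_table"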
Yes, I already do this on another spool table; I run a vacuum after
processing it, but I wondered if there was another way to keep the on-disk
size of this table low.
How much concurrent activity is there in the database? Running a vacuum right
after processing it would not remove the deleted tuples if there's another
transaction running at the same time. Running the vacuum a few minutes later
might help with that. You should run VACUUM VERBOSE to see how many
non-removable dead tuples there are.
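Something like this, run against the spool table (the name is just an
example), reports the removable and non-removable counts; look for the
"dead row versions cannot be removed yet" detail lines in its output
(exact wording varies a bit between versions):

    VACUUM VERBOSE spool_table;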
There isn't much concurrent activity on the database; most of the time a
transaction shouldn't last more than one minute (worst case). The exception
is that I need to run a VACUUM ANALYZE on the whole database every day, and
that vacuum now takes 8 hours (I changed the vacuum settings to make it a
little slower rather than have it use too much I/O and make the database
unusable; with the default vacuum values it took 3-4 hours of heavy I/O,
the whole database being about 20 GB).
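(For reference, the parameters involved are the cost-based vacuum delay
settings in postgresql.conf; the values below are only an illustration,
not necessarily the ones actually in use:)

    # postgresql.conf -- cost-based vacuum delay (illustrative values)
    vacuum_cost_delay = 20    # sleep 20 ms each time the cost limit is reached
    vacuum_cost_limit = 200   # accumulated page-cost budget before sleeping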
So, at this time, the complete vacuum is running, and vacuuming only the
spool table reports that all its dead rows are currently not removable
(which is normal).
I will run it again later when the complete vacuum is over, to see how
pages are affected.
Nicolas