On 12/6/2005 4:08 AM, Assaf Yaari wrote:
> Thanks Bruno,
> Issuing VACUUM FULL doesn't seem to have any influence on the time.
> I've added a VACUUM ANALYZE every 100 UPDATEs to my script and run
> the test again (on a different record), and the time still increases.
I think he meant:
- run VACUUM FULL once,
- adjust the FSM settings to match database size and turnover rate,
- run VACUUM ANALYZE more frequently from there on.
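In 8.0 terms that boils down to something like this (a sketch only;
the FSM numbers and the "counters" table name are placeholders to
size and name for your own database):

    -- one-time cleanup; rewrites the table and takes an exclusive lock
    VACUUM FULL ANALYZE counters;

    -- in postgresql.conf, size the free space map so it can track the
    -- pages freed between vacuums (needs a restart in 8.0):
    --   max_fsm_pages = 200000
    --   max_fsm_relations = 1000

    -- then vacuum often, e.g. from cron every few minutes:
    --   psql -d mydb -c "VACUUM ANALYZE counters"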
Jan
> Any other ideas?
> Thanks,
> Assaf.
-----Original Message-----
From: Bruno Wolff III [mailto:bruno@xxxxxxxx]
Sent: Monday, December 05, 2005 10:36 PM
To: Assaf Yaari
Cc: pgsql-performance@xxxxxxxxxxxxxx
Subject: Re: Performance degradation after successive UPDATE's
On Mon, Dec 05, 2005 at 19:05:01 +0200,
Assaf Yaari <assafy@xxxxxxxxxxxx> wrote:
> Hi,
>
> I'm using PostgreSQL 8.0.3 on Linux RedHat WS 3.0.
>
> My application updates counters in the DB. I left a test running
> overnight that kept incrementing the counter of a specific record.
> After a night of running (several hundred thousand updates), I
> found that the time spent on each UPDATE had increased to more
> than 1.5 seconds (at the beginning it was less than 10ms)! Issuing
> VACUUM ANALYZE and even a reboot didn't seem to solve the problem.
You need to be running vacuum more often to get rid of the dead
rows (an UPDATE is essentially an INSERT plus a DELETE). Once you
get too many of them, plain vacuum won't be able to clean them up
without raising your FSM settings. By now the table is probably
badly bloated and you will want to run VACUUM FULL on it once.
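A quick way to see how bad it has gotten (a sketch; "counters"
stands in for the real table name):

    VACUUM VERBOSE counters;
    -- the INFO/DETAIL output reports removable and nonremovable row
    -- versions; if the dead-row count keeps growing between runs,
    -- the FSM is too small to remember the freed space.
    VACUUM FULL counters;  -- reclaims the space, but locks the table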
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck@xxxxxxxxx #