Thanks Bruno,

Issuing VACUUM FULL seems to have no influence on the time. I've added
VACUUM ANALYZE to my script every 100 UPDATEs and run the test again
(on a different record), and the time still increases.

Any other ideas?

Thanks,
Assaf.

> -----Original Message-----
> From: Bruno Wolff III [mailto:bruno@xxxxxxxx]
> Sent: Monday, December 05, 2005 10:36 PM
> To: Assaf Yaari
> Cc: pgsql-performance@xxxxxxxxxxxxxx
> Subject: Re: Performance degradation after successive UPDATE's
>
> On Mon, Dec 05, 2005 at 19:05:01 +0200,
>   Assaf Yaari <assafy@xxxxxxxxxxxx> wrote:
> > Hi,
> >
> > I'm using PostgreSQL 8.0.3 on Linux RedHat WS 3.0.
> >
> > My application updates counters in the DB. I left a test running
> > overnight that repeatedly incremented the counter of a specific
> > record. After the night's run (several hundred thousand updates),
> > I found that the time spent on an UPDATE had increased to more
> > than 1.5 seconds (at the beginning it was less than 10 ms)!
> > Issuing VACUUM ANALYZE and even rebooting didn't seem to solve
> > the problem.
>
> You need to run vacuum more often to get rid of the dead rows (an
> UPDATE is essentially an INSERT plus a DELETE). Once too many have
> accumulated, a plain vacuum won't be able to clean them up without
> raising the value you use for the FSM. By now the table is really
> bloated, and you probably want to run VACUUM FULL on it.
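
For reference, a minimal sketch of the maintenance Bruno describes. The
table and column names here are illustrative assumptions, not taken from
the original posts, and the FSM values shown are the PostgreSQL 8.0
defaults:

    -- hypothetical counter table standing in for the one in the test:
    --   CREATE TABLE counters (id integer PRIMARY KEY, value bigint);

    -- each UPDATE leaves a dead row version behind (insert + delete):
    UPDATE counters SET value = value + 1 WHERE id = 1;

    -- vacuum the hot table frequently so dead row versions are
    -- reclaimed while they still fit in the free space map:
    VACUUM ANALYZE counters;

    -- 8.0 can only reuse dead space that the FSM tracks; the relevant
    -- postgresql.conf settings (8.0 defaults shown) are:
    --   max_fsm_pages = 20000
    --   max_fsm_relations = 1000

    -- once the table has already bloated, plain VACUUM no longer
    -- shrinks it; VACUUM FULL compacts it (takes an exclusive lock):
    VACUUM FULL counters;

A database-wide VACUUM VERBOSE prints free space map usage at the end,
which indicates whether max_fsm_pages is large enough for the workload.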