Hi,

You might try these steps:
1. Run a VACUUM FULL ANALYZE on the table
2. REINDEX the index on the id column
3. CLUSTER the table on that index
(the exact commands are sketched at the end of this mail)

On 12/5/05, Assaf Yaari <assafy@xxxxxxxxxxxx> wrote:
>
> Hi,
>
> I'm using PostgreSQL 8.0.3 on Linux RedHat WS 3.0.
>
> My application updates counters in the DB. I left a test running over night
> that repeatedly increased the counter of a specific record. After the
> night's run (several hundred thousand updates), I found that the time spent
> on an UPDATE had grown to more than 1.5 seconds (at the beginning it was
> less than 10 ms)! Issuing VACUUM ANALYZE and even a reboot didn't seem to
> solve the problem.
>
> I managed to reproduce this with a simple test:
>
> I created a very simple table that looks like this:
> CREATE TABLE test1
> (
>   id int8 NOT NULL,
>   counter int8 NOT NULL DEFAULT 0,
>   CONSTRAINT "Test1_pkey" PRIMARY KEY (id)
> );
>
> I inserted 15 entries and wrote a script that increases the counter of a
> specific record over and over. The SQL command looks like this:
> UPDATE test1 SET counter=number WHERE id=10;
>
> At the beginning the UPDATE time was around 15 ms. After ~90000 updates,
> the execution time increased to more than 120 ms.
>
> 1. What is the reason for this phenomenon?
> 2. Is there anything that can be done in order to improve this?
>
> Thanks,
> Assaf

--
Regards
Pandu
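
P.S. A minimal sketch of the commands for steps 1-3, assuming the table and
primary-key index names from your example ("test1" and "Test1_pkey") and the
pre-8.3 CLUSTER syntax, since you are on 8.0.3. Each UPDATE in PostgreSQL
leaves behind a dead row version, and those accumulate in the table and its
index until they are cleaned up, which is why the statement keeps getting
slower.

-- rewrite the table to reclaim the space held by dead row versions,
-- then refresh the planner statistics
VACUUM FULL ANALYZE test1;

-- rebuild the primary-key index, which also bloats as the same row
-- is updated again and again
REINDEX INDEX "Test1_pkey";

-- rewrite the table in index order (8.0 syntax: index name first);
-- this drops dead tuples as a side effect
CLUSTER "Test1_pkey" ON test1;

Note that VACUUM FULL and CLUSTER take an exclusive lock on the table, so run
them while the updating script is stopped. Running a plain VACUUM on the table
frequently during the test should keep the bloat from building up in the
first place.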