Hi,
does no one have an idea?
Running the same UPDATE a thousand times may be a rare case, but I'm really interested in why this does not happen when doing DIFFERENT updates. And, of course, whether something could be done on the database side to prevent this behavior in case an application developer makes the same “mistake” again.
Thanks
Jan
From: Jan Strube
Sent: Tuesday, February 10, 2015 12:03 PM
To: 'pgsql-general@xxxxxxxxxxxxxx'
Subject: Performance slowing down when doing same UPDATE many times
Hi,
we recently found a bug in one of our applications which was executing exactly the same UPDATE statement a few thousand times inside a single transaction. This caused the UPDATEs to become slower and slower, going from a few milliseconds to several seconds. We have already fixed the application, but I am wondering whether this might also be a PostgreSQL bug.
Here is a simple test case that performs 100,000 UPDATEs and times them (printing only every 10,000th timing to reduce output):
BEGIN;
CREATE TEMP TABLE test (id integer PRIMARY KEY, flag boolean DEFAULT false);
INSERT INTO test (id) SELECT generate_series(1, 100000);
DO $$
DECLARE
    s timestamp;
    e timestamp;
BEGIN
    FOR i IN 1..100000 LOOP
        SELECT clock_timestamp() INTO s;
        UPDATE test SET flag = true WHERE id = 12345;
        SELECT clock_timestamp() INTO e;
        IF i % 10000 = 0 THEN
            RAISE NOTICE '%', e - s;
        END IF;
    END LOOP;
END $$;
ROLLBACK;
The output looks like this:
NOTICE: 00:00:00.000525
NOTICE: 00:00:00.000992
NOTICE: 00:00:00.001404
NOTICE: 00:00:00.001936
NOTICE: 00:00:00.002374
NOTICE: 00:00:00.002925
NOTICE: 00:00:00.003525
NOTICE: 00:00:00.004015
NOTICE: 00:00:00.00453
NOTICE: 00:00:00.004976
The problem only occurs inside a transaction and only if the same row is updated repeatedly. I'm using PostgreSQL 9.1.15.
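For comparison, this is the kind of "different updates" I mean: a variation of the loop above (a sketch, not benchmarked as carefully) where each iteration touches a different row, and the slowdown does not occur:

DO $$
DECLARE
    s timestamp;
    e timestamp;
BEGIN
    FOR i IN 1..100000 LOOP
        SELECT clock_timestamp() INTO s;
        UPDATE test SET flag = true WHERE id = i;  -- a different row in every iteration
        SELECT clock_timestamp() INTO e;
        IF i % 10000 = 0 THEN
            RAISE NOTICE '%', e - s;
        END IF;
    END LOOP;
END $$;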
Jan
Hi
it is a side effect of the MVCC implementation of Postgres. VACUUM is not possible inside an open transaction.
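One rough way to see this (a sketch, not from the original test; the table name bloat_test and the switch from ROLLBACK to COMMIT are assumptions for illustration): every UPDATE inside the still-open transaction leaves a dead version of the row behind that cannot be pruned or vacuumed away until the transaction ends, so the heap keeps growing and each new UPDATE has to step over an ever longer chain of versions of that one row.

BEGIN;
CREATE TABLE bloat_test (id integer PRIMARY KEY, flag boolean DEFAULT false);
INSERT INTO bloat_test (id) SELECT generate_series(1, 100000);
SELECT pg_size_pretty(pg_relation_size('bloat_test'));   -- baseline heap size
DO $$
BEGIN
    FOR i IN 1..100000 LOOP
        UPDATE bloat_test SET flag = true WHERE id = 12345;   -- 100,000 versions of a single row
    END LOOP;
END $$;
SELECT pg_size_pretty(pg_relation_size('bloat_test'));   -- noticeably larger: the dead versions pile up
COMMIT;
VACUUM bloat_test;   -- only possible outside the transaction; marks the dead versions as reusable
SELECT pg_size_pretty(pg_relation_size('bloat_test'));   -- note: plain VACUUM usually does not shrink the file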