Maybe reconsider your expectation. Note: every UPDATE has to "select" the row before modifying it. Even if the page is already in memory, there is still work to do: reading the row, acquiring locks, modifying it, and requesting the write to disk.

Regards,
Tobi

> On 13 Feb 2023, at 18:48, Laurenz Albe <laurenz.albe@xxxxxxxxxxx> wrote:
>
> On Mon, 2023-02-13 at 16:09 +0100, Mkrtchyan, Tigran wrote:
>> Typically we expect that UPDATE is a slow operation in PostgreSQL; however,
>> there are cases where it is hard to understand why. In particular, I have a table like
>>
>> ```
>> CREATE SEQUENCE t_inodes_inumber_seq
>>     START WITH 1
>>     INCREMENT BY 1
>>     NO MINVALUE
>>     NO MAXVALUE
>>     CACHE 1;
>>
>> CREATE TABLE t_inodes (
>>     inumber bigint PRIMARY KEY,
>>     icrtime timestamp with time zone NOT NULL,
>>     igeneration bigint NOT NULL
>> );
>> ```
>>
>> and a transaction that inserts and updates an entry in that table:
>>
>> ```
>> BEGIN;
>> INSERT INTO t_inodes (inumber, icrtime, igeneration)
>>     VALUES (nextval('t_inodes_inumber_seq'), now(), 0) RETURNING inumber \gset
>>
>> UPDATE t_inodes SET igeneration = igeneration + 1 WHERE inumber = :inumber;
>> END;
>> ```
>>
>> pgbench shows the following result:
>>
>> ```
>> $ pgbench -h localhost -n -r -f update.sql -t 10000 -c 64 -j 64 testdb
>> pgbench (15.0 (Debian 15.0-1.pgdg110+1))
>> transaction type: update.sql
>> scaling factor: 1
>> query mode: simple
>> number of clients: 64
>> number of threads: 64
>> maximum number of tries: 1
>> number of transactions per client: 10000
>> number of transactions actually processed: 640000/640000
>> number of failed transactions: 0 (0.000%)
>> latency average = 11.559 ms
>> initial connection time = 86.038 ms
>> tps = 5536.736898 (without initial connection time)
>> statement latencies in milliseconds and failures:
>>          0.524           0  BEGIN;
>>          0.819           0  INSERT INTO t_inodes (inumber, icrtime, igeneration)
>>          0.962           0  UPDATE t_inodes SET igeneration = igeneration + 1 WHERE inumber = :inumber;
>>          9.203           0  END;
>> ```
>>
>> My naive expectation would be that updating the newly inserted record should cost almost nothing... Are there ways
>> to make it less expensive?
>
> Updating a newly inserted row is about as expensive as inserting the row in the first place.
>
> You can reduce the overall impact somewhat by creating the table with a "fillfactor" below
> 100; in your case 90 would probably be enough. That won't speed up the UPDATE itself, but
> it should greatly reduce the need for VACUUM.
>
> Yours,
> Laurenz Albe
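
P.S. For completeness, a sketch of Laurenz's fillfactor suggestion applied to the table from the original post (the value 90 is his suggested setting, not a measured optimum):

```
-- Leave ~10% of each heap page free so an UPDATE can place the new row
-- version on the same page. Since igeneration is not indexed, such
-- updates can be HOT (heap-only tuple) updates, reducing index and
-- VACUUM overhead.
CREATE TABLE t_inodes (
    inumber     bigint PRIMARY KEY,
    icrtime     timestamp with time zone NOT NULL,
    igeneration bigint NOT NULL
) WITH (fillfactor = 90);

-- For an already existing table:
ALTER TABLE t_inodes SET (fillfactor = 90);
-- Note: ALTER only affects pages written afterwards; existing pages
-- only honor the new setting after a rewrite (e.g. VACUUM FULL).
```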