[ please keep the list cc'd ]

Sylvain Déve <sylvain.deve@xxxxxxxxxxxxxx> writes:
> Indeed I removed the important part here... I was including a function
> definition ("create or replace function ...") in the call too. This was
> temporary and dirty. After moving the definition of the function to the
> initialization of the database, it solved everything... Defining the
> same function multiple times, and I presume more or less at the same
> time, led to problems. The table update is carried out finally without
> any problem...

Hah, now I can reproduce it:

regression=# create or replace function foo(int) returns int as 'select 1' language sql;
CREATE FUNCTION
regression=# begin;
BEGIN
regression=*# create or replace function foo(int) returns int as 'select 1' language sql;
CREATE FUNCTION

... in another session:

regression=# create or replace function foo(int) returns int as 'select 1' language sql;
<<blocks>>

... in first session:

regression=*# commit;
COMMIT

and now the second session fails with

ERROR:  tuple concurrently updated

because both transactions are trying to update the same pre-existing row
of pg_proc.  (If the function didn't exist to start with, then you get
"duplicate key value violates unique constraint" instead.)

That's basically because internal catalog manipulations don't go to the
same lengths as user queries do to handle concurrent-update scenarios
nicely.  I'm not sure what would be involved in making that better, but
I am sure it'd be a lot of work :-(

			regards, tom lane
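[Editor's sketch, not from the thread above: since the race comes from two sessions replacing the same pg_proc row at once, an application that cannot avoid concurrent redefinition can serialize the competing sessions itself with PostgreSQL's advisory locks. The lock key 12345 below is an arbitrary example value; any agreed-upon key works as long as every redefining session uses the same one.]

```sql
BEGIN;
-- Serialize competing definers of foo().  Every session that may
-- redefine the function must take the same advisory lock first; the
-- transaction-scoped lock is released automatically at COMMIT/ROLLBACK.
SELECT pg_advisory_xact_lock(12345);
CREATE OR REPLACE FUNCTION foo(int) RETURNS int
    AS 'select 1' LANGUAGE sql;
COMMIT;
```

With this, a second session blocks on the advisory lock rather than on the in-progress catalog update, and proceeds cleanly once the first commits, instead of failing with "tuple concurrently updated".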