I am not an expert here, but you may be able to use pg_try_advisory_lock to ensure the same rows are not modified by two different processes in the same function. Basically, try to take an advisory lock on the rows you want to change, and if the lock cannot be acquired, return early instead of blocking. The two sessions below demonstrate this; a sketch of how it could look inside the function follows them.
In one session the locks cannot be acquired, because the session below already holds them:

postgres=# begin;
BEGIN
postgres=*# SELECT pg_try_advisory_lock(id) from t where id < 10;
pg_try_advisory_lock
----------------------
f
f
f
f
f
f
f
f
f
(9 rows)
In the other session, which took the locks first:

postgres=# begin;
BEGIN
postgres=*# SELECT pg_try_advisory_lock(id) from t where id < 10;
pg_try_advisory_lock
----------------------
t
t
t
t
t
t
t
t
t
(9 rows)
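Inside the function it could look something like this (untested sketch, not your actual Foo(): foo(), p_id, and p_content are made-up names, and I am using the xact variant of the lock, which is released automatically at commit or rollback):

CREATE OR REPLACE FUNCTION foo(p_id bigint, p_content text)
RETURNS boolean
LANGUAGE plpgsql
AS $$
BEGIN
    -- Try to take an advisory lock keyed on the row id; pg_try_advisory_xact_lock
    -- never blocks, and the lock is held until the transaction ends.
    IF NOT pg_try_advisory_xact_lock(p_id) THEN
        -- Another transaction is already working on this row: return instead of waiting.
        RETURN false;
    END IF;

    UPDATE records SET content = p_content WHERE id = p_id;
    RETURN true;
END;
$$;

The caveat is that this only helps if every code path that touches the row goes through the same lock call.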
On Wed, 2 Jun 2021 at 11:52, gerry gan <xiang.gan.aalto@xxxxxxxxx> wrote:
Hi,

Our story is as follows. We have a function called Foo() which internally calls many other sub-functions. These sub-functions have complicated if..else.. conditions. Calling Foo() starts a transaction. The scenario is that when two processes call Foo() at the same time, the two transactions deadlock. I have extracted some log lines below.

---Log 1---
2021-05-11 12:03:03 UTC testdb postgres ERROR: deadlock detected
2021-05-11 12:03:03 UTC testdb postgres DETAIL: Process 3390 waits for ShareLock on transaction 18569288; blocked by process 29031.
Process 29031 waits for ShareLock on transaction 18569301; blocked by process 3390.
Process 3390: delete from records where id = '759476540'
Process 29031: update records set content='foo bar' where id = 121496691 and type = 'xyz'
---End of Log 1---

---Log 2---
2021-05-11 19:22:05 UTC testdb postgres ERROR: deadlock detected
2021-05-11 19:22:05 UTC testdb postgres DETAIL: Process 21865 waits for ShareLock on transaction 18574374; blocked by process 21873.
Process 21873 waits for ShareLock on transaction 18574373; blocked by process 21865.
Process 21865: update records set content='foo abc' where id = 759698419 and type = 'xyz'
Process 21873: update records set content='foo def' where id = 686728333 and type = 'xyz'
---End of Log 2---

Based on the logs, the deadlocks happen on the table records. I have read some articles about this kind of deadlock. Mostly, the suggested solution is to impose a consistent (deterministic) ordering on the commands in the transaction so that they do not block each other. I wonder whether this can be applied in our case. As the two logs show, the function takes different branches depending on user input, and the deadlock then occurs at different operations (Log 1 blocked on a delete and an update, while Log 2 blocked on two updates). My question is whether it is feasible to impose a consistent command ordering in our case, given only the superficial picture in these two logs. Or is an explicit table lock more applicable here, for example locking the table records at the beginning of the transaction so that deadlock is avoided? But performance might suffer with an explicit table lock. Any insight into this issue is highly appreciated, and thank you in advance!

--
Best regards,
Gerry
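On the consistent-ordering idea from the mail above: it can still work with branchy code, as long as every transaction locks all the rows it may touch up front, in one fixed order. A rough, untested sketch (the ids are just the ones from Log 1, for illustration):

BEGIN;
-- Lock every row this call of Foo() may touch, always sorted by id, so two
-- transactions can never take the same pair of row locks in opposite orders.
SELECT id
  FROM records
 WHERE id IN (121496691, 759476540)
 ORDER BY id
   FOR UPDATE;
-- ...the delete/update branches of Foo() run here...
COMMIT;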
Thanks,
Vijay
Mumbai, India