Hi list,

I need to check a complex unique constraint before inserts/updates on table X, so I use a BEFORE trigger to do this. Without any explicit lock I get a race condition when two inserts happen almost simultaneously, because each insert changes the result of the condition the trigger checks. So I added a LOCK TABLE X IN EXCLUSIVE MODE as the first statement of the trigger, but that leads to a deadlock, because PostgreSQL automatically acquires a weaker lock on X for the INSERT (ROW EXCLUSIVE mode, which conflicts with EXCLUSIVE mode) before the trigger code executes. With two processes, one per insert, here is what happens (as far as I understand it):

1. Each process obtains the automatic INSERT lock on table X.
2. The first process to reach the LOCK TABLE statement in the trigger cannot obtain the exclusive lock on X, because the table is already locked by the second process, so the first process waits for the lock.
3. The second process reaches the LOCK TABLE statement in its trigger, PostgreSQL detects a deadlock, and that process releases its lock on X and fails.
4. The first process continues, because the second process released its lock.

As a result one process (the first one) succeeds, but only after a deadlock exception that I would have to handle somehow in the application, and that isn't simple to do. The simplest way to prevent the deadlock would be to acquire an exclusive lock instead of the weaker one before the trigger executes, but there doesn't seem to be a way to do this in PostgreSQL, is there? I don't want to solve the problem in the application but at the DB level.

Thank you in advance for any help,
Peter

--
View this message in context: http://www.nabble.com/deadlock-in-trigger-before-insert-tp22881336p22881336.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
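P.S. In case it helps, here is a stripped-down sketch of the kind of trigger I mean. The table, columns, and the uniqueness condition are made up for illustration; my real constraint is more complex:

```sql
-- Hypothetical simplified setup: table X with a uniqueness rule
-- that spans more than one column (stand-in for my real condition).
CREATE TABLE X (
    id  serial  PRIMARY KEY,
    grp integer NOT NULL,
    val integer NOT NULL
);

CREATE OR REPLACE FUNCTION check_unique_x() RETURNS trigger AS $$
BEGIN
    -- This is the explicit lock I added as the first statement;
    -- it is what deadlocks against the lock the INSERT already holds.
    LOCK TABLE X IN EXCLUSIVE MODE;

    -- Made-up stand-in for the real complex uniqueness check.
    IF EXISTS (SELECT 1 FROM X
               WHERE grp = NEW.grp AND val = NEW.val) THEN
        RAISE EXCEPTION 'uniqueness violated for grp %, val %',
                        NEW.grp, NEW.val;
    END IF;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER x_check_unique
    BEFORE INSERT OR UPDATE ON X
    FOR EACH ROW EXECUTE PROCEDURE check_unique_x();
```

Running two concurrent INSERTs against this in separate sessions reproduces the deadlock sequence I described above.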