Mario Splivalo <mario.splivalo@xxxxxxxxx> writes:
> For the purpose of the application I need to establish some form of
> serialization, therefore I use FOR UPDATE. The query, inside the
> function, is like this:

> pulitzer2=# explain analyze select id FROM messages JOIN
> ticketing_codes_played ON id = message_id WHERE service_id = 1102 AND
> receiving_time BETWEEN '2006-03-01' AND '2006-06-30' FOR UPDATE;

>  Hash Join  (cost=32131.04..34281.86 rows=627 width=16) (actual
> time=742.806..1491.864 rows=58005 loops=1)
                                ^^^^^

> Now, this query takes between 8 and 30 seconds, which is a lot, since
> during the day we have almost 20 requests per minute.

Acquiring a row lock separately for each of 58000 rows is not going to
be a cheap operation.  Especially not if anyone else is locking any of
the same rows and thereby blocking you.  If there is concurrent
locking, you're also running a big risk of deadlock, because two
processes might try to lock the same rows in different orders.

Are you really intending to update all 58000 rows?  If not, what is
the serialization requirement exactly (ie, what are you trying to
accomplish)?  Seems like something about this app needs to be
redesigned.

			regards, tom lane
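
[Editor's note: the following SQL is not from the original thread; it is a
hedged sketch of two common ways to address the problems Tom describes. It
assumes a hypothetical "services" parent table with one row per service_id,
which the original post does not mention.]

```sql
-- (1) If the goal is simply to serialize per-service work, lock one
-- parent row instead of all 58000 detail rows (assumes a hypothetical
-- "services" table keyed by service_id):
BEGIN;
SELECT 1 FROM services WHERE id = 1102 FOR UPDATE;
-- ... do the per-service work here ...
COMMIT;

-- (2) If the detail rows really must be locked individually, taking
-- the locks in a consistent order (here, by id) makes concurrent
-- transactions block on each other instead of deadlocking:
SELECT id
FROM messages
JOIN ticketing_codes_played ON id = message_id
WHERE service_id = 1102
  AND receiving_time BETWEEN '2006-03-01' AND '2006-06-30'
ORDER BY id
FOR UPDATE;
```

Option (1) reduces 58000 row locks to one, at the cost of serializing all
transactions for that service; option (2) keeps per-row locking but removes
the mixed-ordering deadlock hazard, since every transaction acquires locks
in ascending id order.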