"Yngve Nysaeter Pettersen" <yngve@xxxxxxxxx> writes: > To avoid having the processes trample each other's queries (the first > attempt was to select the first matching entries of the table, which > caused one to block all other transactions), one of the steps I took was > to select a set of idle rows at a random offset into the table from the > project, mark them for update, then update each record's state as started. > SELECT record_id FROM queue WHERE project_id = my_project AND state = > idle LIMIT n OFFSET i FOR UPDATE > At present "n" is 100-150, "i" is a random value in the range 0-10000. > There is, intentionally, no ordering specified, since that would just slow > down the query, and is not necessary. This seems like a pretty bad design. There are recognized ways to solve this problem with more predictability and much less chance of different processes blocking each other. In particular, this query seems be based on some untenable assumptions about the physical row order being stable. > What I've discovered when using Postgres 9.0 is that the processes are now > blocking every other query into this table, In 9.0, LIMIT/OFFSET processing is done after FOR UPDATE locking, which means that rows skipped over by OFFSET still get locked, which means that different sessions executing this query are now practically certain to block each other, rather than just likely to block each other. This was an intentional change to improve the predictability of FOR UPDATE's interactions with LIMIT/OFFSET, and indeed it's improved the predictability of the behavior for you, just not in the direction you'd like :-( regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general