I tested it by visual inspection of advisory locks in pg_locks, once with a small test table and once on a larger 'operations' table in our test environment. It seemed to work, but I hear you; I don't like to depend on the mood of the optimizer.

The drawback of the subquery version is that if the queue table gets big (e.g. consumers are shut off), it will spend a lot of time building a worktable only for the outer query to take a few items from it. Perhaps putting a limit in the inner query as well can alleviate that, like:

    LIMIT (expected number of consumers) * (batch size)

There is a theoretical race condition with this approach (fast vs. slow consumers), but it's harmless: a consumer that executes the inner query slowly may get fewer items (or none) even though some are available, but they will be picked up eventually, so it's no big deal. (A sketch of both query shapes follows the quoted message below.)

thanks,
Kiriakos

On Feb 16, 2012, at 12:44 AM, Tom Lane wrote:

> Kiriakos Georgiou <kg.postgresql@xxxxxxxxxxxxxx> writes:
>> As I understand it, the order of evaluation of search arguments is up
>> to the optimizer. I've tested the following query, which is supposed
>> to take advantage of advisory locks to skip over rows that are locked
>> by other consumers running the exact same query, and it seems to work
>> fine. It seems to me the optimizer should always prefer to scan by
>> status. What say you?
>
> "When it breaks, you get to keep both pieces."  Was your testing even
> capable of noticing the problem if the query locked more rows than you
> wanted?
>
> Less dangerous coding practices might involve putting the lock function
> in an outer query, while using an OFFSET 0 in the sub-query as an
> optimization fence.
>
>         regards, tom lane
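PS: for the archives, here is a minimal sketch of both query shapes. It assumes a hypothetical queue table along the lines of

    CREATE TABLE queue (id bigint PRIMARY KEY, status text, payload text);

(the table and column names are illustrative, not our actual schema):

    -- Original shape: the lock function sits in the same WHERE clause
    -- as the status filter, so the optimizer is free to call
    -- pg_try_advisory_lock() on rows that never match status = 'pending',
    -- i.e. to lock more rows than intended.
    SELECT id, payload
      FROM queue
     WHERE status = 'pending'
       AND pg_try_advisory_lock(id)
     LIMIT 10;

    -- Safer shape along the lines Tom suggests: OFFSET 0 acts as an
    -- optimization fence, preventing the planner from flattening the
    -- sub-query, so pg_try_advisory_lock() only ever sees rows that
    -- already passed the status filter and the inner LIMIT.
    SELECT id, payload
      FROM (SELECT id, payload
              FROM queue
             WHERE status = 'pending'
             LIMIT 50    -- e.g. (expected consumers) * (batch size)
            OFFSET 0     -- optimization fence
           ) AS candidates
     WHERE pg_try_advisory_lock(id)
     LIMIT 10;

Either way the locks are session-level, so each consumer has to call pg_advisory_unlock(id) (or disconnect) once it is done with an item.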