On Oct 4, 2013, at 13:03 , Kevin Grittner <kgrittn@xxxxxxxxx> wrote:

> That is not a valid assumption. For one thing, the default
> transaction isolation level is read committed, and at that
> isolation level you are not guaranteed to even get the same *rows*
> running the same query twice within the same transaction, much less
> in the same order.

I guess I should have mentioned that we are using serializable snapshot isolation (thanks for that, BTW!).

> if there is already a sequential
> scan in progress for another process, the new one will start at the
> point the other one is at, and "wrap around". This can save a lot
> of physical disk access, resulting in better performance.

Oh! This totally makes sense. This is *exactly* the kind of thing I was looking for, and I'll bet it is exactly what was happening in our case. The table is pretty small, so EXPLAIN shows Postgres doing a full sequential scan for this query.

Thanks for the speedy, insightful answer! This is yet another example of something where, once we were tracking down the bug, we knew immediately that the assumption was invalid, but sometimes you don't notice these things the first time. The joys of software.

Evan

--
Work: https://www.mitro.co/
Personal: http://evanjones.ca/

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
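[For readers following along: the behavior discussed above can be sketched as follows. This is a hedged illustration, not from the original thread; the table `events` and its columns are hypothetical. `synchronize_seqscans` is a real PostgreSQL setting (on by default since 8.3) that controls the scan-coordination behavior Kevin describes, but the reliable fix for order-dependent code is an explicit ORDER BY.]

```sql
-- Hypothetical table for illustration (not from the thread).
-- With synchronized sequential scans, a second concurrent scan may
-- start mid-table and wrap around, so this can return the same rows
-- in a different order on different runs:
SELECT id, payload FROM events;

-- SQL only guarantees row order when you ask for one explicitly:
SELECT id, payload FROM events ORDER BY id;

-- Synchronized scans can also be turned off for the session, though
-- relying on unordered scan order is still not guaranteed by SQL:
SET synchronize_seqscans = off;
```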