Craig Ringer <craig@xxxxxxxxxxxxxxxxxxxxx> writes:
> Personally I'd expect that to only evaluate once. It's saying "where
> f_name.counter in this row is equal to some single random value
> generated at the start of the query". The parameters of the random()
> function do not depend on the input, so Pg evaluates it once rather
> than with each row.

No, that's not the model that we use.  In my view, the semantics
specified by the SQL standard for

	SELECT ... FROM a, b, c WHERE ...

are that you notionally form the Cartesian product a X b X c and then
evaluate the WHERE condition at each row of the product to find out
which rows make it into the SELECT result.  Obviously, the DBMS is
going to try like heck to optimize the execution of that model, but
its optimizations should not change the results.

We're not perfect about that, in a number of ways --- in particular,
we allow short-circuit evaluation of ANDed conditions, which means
that we might not bother to execute a volatile function for some row
of the Cartesian product if we're able to exclude that row from the
result based on other WHERE conditions.

The OP's complaint is a bit different from this, in that (if I
understand it correctly) he doesn't want the WHERE condition to be
dropped down to the individual relation scan, because that amounts to
improperly assuming that the volatile WHERE condition will produce
the same result at every row of the Cartesian product that included
that row from the base relation.  This gripe isn't without
foundation.  But I'm not seeing what guarantees that would buy you
that wouldn't be broken anyway by AND-shortcircuit evaluation.
Dropping the AND-shortcircuit optimization would mean reverting to
the brain-dead "form the Cartesian product and then filter" approach
as soon as even one volatile function appeared anywhere in the WHERE
conditions.  I doubt that either the OP or anyone else wants that.

			regards, tom lane
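
PS: for concreteness, here's a contrived illustration of the pushdown
at issue (the table and data are invented, not taken from the original
report):

	CREATE TABLE f (counter int);
	INSERT INTO f SELECT generate_series(1, 5);

	-- The planner is free to push this volatile qual down to the
	-- scan of f, so random() may be evaluated once per row of f
	-- rather than once per row of the f X g Cartesian product.
	-- Which join rows survive can differ between the two readings.
	SELECT *
	  FROM f, f AS g
	 WHERE f.counter = floor(random() * 5 + 1)::int;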