I wrote:
> It looks to me like the planner thinks that order by a and order by b
> are equivalent because the expressions are equal(); hence it discards
> what it thinks is a redundant second sort step.
> ... What's the use-case for sorting by a volatile
> expression in the first place?

It may be worth pointing out that there are related gotchas without
bothering with anything as complicated as a sub-select.  Consider

	select random() from foo order by 1;

	select random() from foo order by random();

Are these the same, or not?  If you experiment you'll find out that
Postgres treats them the same --- random() is evaluated only once per
row of foo, and you get output that is sorted.  Arguably, for the second
case there should be two evaluations of random() per row, and you should
get output that appears randomly ordered (because the sort key and the
output value would be uncorrelated).  If you do

	select random() from foo order by random()+1;

then you do get two evaluations and random-looking output.

I'd be the first to admit that these various behaviors "just grew"
rather than being intentionally designed; no one has been thinking
about volatility in sort keys.  The question remains whether it is
worth expending development effort and planning cycles to have a more
consistent definition.  What's the use-case?

			regards, tom lane
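[For readers outside the thread: the two behaviors Tom describes can be simulated outside of Postgres. This is an illustrative Python sketch, not Postgres internals; `rows` and the strategy names are hypothetical stand-ins for table foo and the planner's choices.]

```python
import random

rows = range(100)  # stands in for the rows of table foo

# Strategy A -- what Postgres does for "order by random()" (and "order by 1"):
# random() is evaluated once per row; the same value serves as both the sort
# key and the output column, so the output necessarily comes out sorted.
once = sorted(random.random() for _ in rows)

# Strategy B -- what "order by random()+1" gives: the sort key and the output
# column are evaluated independently, so the output is uncorrelated with the
# sort order and looks randomly ordered.
pairs = sorted(((random.random() + 1, random.random()) for _ in rows),
               key=lambda kv: kv[0])
twice = [out for _key, out in pairs]

print(once == sorted(once))    # True: single evaluation yields sorted output
print(twice == sorted(twice))  # almost certainly False with 100 rows
```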