On 11/25/2011 06:53 AM, Maxim Boguk wrote:
> I understand that position. However, if the assumption "the definition of
> ORDER BY --- it happens after computing the select list, according to the
> SQL standard" is correct, then plans like:
>
> postgres=# EXPLAIN ANALYZE SELECT * from test order by _data limit 10 offset 1000;
>                                                            QUERY PLAN
> --------------------------------------------------------------------------------------------------------------------------------------
>  Limit  (cost=2884.19..2913.03 rows=10 width=8) (actual time=3.584..3.620 rows=10 loops=1)
>    ->  Index Scan using random_key on test  (cost=0.00..2884190.16 rows=1000000 width=8) (actual time=0.103..3.354 rows=1010 loops=1)
>  Total runtime: 3.663 ms
> (3 rows)
>
> should not be used at all.
LIMIT and OFFSET are explicitly defined to compute only as much of the SELECT list as is required to produce the rows actually returned. If they weren't specifically defined with that exception, you'd be right.
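If you want to see how many times a select-list function is actually evaluated under a given plan, one rough way is to count calls through a side effect. A sketch (test, id, and noisy() are hypothetical names; note that the function is VOLATILE by default because of the side effect, which can itself change how the planner treats it):

    CREATE SEQUENCE call_counter;

    CREATE FUNCTION noisy(i integer) RETURNS integer AS $$
        SELECT nextval('call_counter')::integer;  -- side effect: bump the counter on every call
    $$ LANGUAGE sql;

    SELECT noisy(id) FROM test ORDER BY id LIMIT 10 OFFSET 1000;
    SELECT currval('call_counter');  -- how many times noisy() was actually evaluated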
LIMIT and OFFSET aren't standard anyway, so Pg can define them to mean whatever is most appropriate. The SQL standard has added new and (as usual) painfully clumsily worded features that work like LIMIT and OFFSET, but I don't know whether they carry the same rules about when function execution can be skipped.
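For reference, that standard syntax is the OFFSET ... ROWS / FETCH FIRST ... ROWS ONLY clause from SQL:2008, which PostgreSQL (8.4 and later) also accepts as an equivalent spelling:

    -- PostgreSQL's traditional syntax:
    SELECT * FROM test ORDER BY _data LIMIT 10 OFFSET 1000;

    -- SQL:2008 equivalent:
    SELECT * FROM test ORDER BY _data OFFSET 1000 ROWS FETCH FIRST 10 ROWS ONLY;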
> And it is not possible to predict in advance where and when you will get hit by that problem.
That's the biggest problem with statistics- and heuristics-based query planners in general, but this does seem to be a particularly difficult case.
Setting a cost on the function that more accurately reflects how expensive it is might help, since PostgreSQL will then work harder to avoid calling it. See http://www.postgresql.org/docs/current/static/sql-createfunction.html .
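Concretely, that's the COST clause on CREATE FUNCTION, or ALTER FUNCTION for an existing one. A sketch with a hypothetical function name; COST is in units of cpu_operator_cost, and the default for SQL and procedural-language functions is 100:

    -- Hypothetical expensive function, declared with an explicit cost:
    CREATE FUNCTION slow_func(i integer) RETURNS integer AS $$
        SELECT i;  -- stand-in for the real, expensive body
    $$ LANGUAGE sql COST 10000;

    -- Or raise the cost of an already-existing function:
    ALTER FUNCTION slow_func(integer) COST 10000;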
--
Craig Ringer