Clive Page wrote:
I have a table cov3 of about 3 million rows, with a B-tree index on an
integer column called hpix. If I do a simple select on this column, it
works in milliseconds, using the index naturally:
select * from cov3 where hpixint = 482787587;
   hpix    |  expos  |  hpixint
-----------+---------+-----------
 482787587 | 30529.6 | 482787587
(1 row)
This doesn't show whether any index is being used; EXPLAIN ANALYSE would have.
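For example (a sketch only; the index name in the output will be whatever
yours is called):

explain analyse select * from cov3 where hpixint = 482787587;

An "Index Scan ... on cov3" node in the plan, rather than "Seq Scan on
cov3", confirms the index is actually used, and EXPLAIN ANALYSE reports
the real run time as well.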
The problem is that I want to use a user-defined function called healpix
which returns a single integer value in my queries; the function details
are unlikely to be relevant (it selects a pixel from a celestial
position), but its definition is:
\df healpix
                            List of functions
 Schema |  Name   | Result data type |        Argument data types
--------+---------+------------------+------------------------------------
 public | healpix | integer          | double precision, double precision
select * from cov3 where hpix = healpix(2.85,-11.48);
but it takes ages. An EXPLAIN shows why: it insists on a sequential
scan:
explain select * from cov3 where hpix = healpix(2.85,-11.48);
                                      QUERY PLAN
----------------------------------------------------------------------------------------
 Seq Scan on cov3  (cost=0.00..93046.81 rows=1 width=20)
   Filter: (hpix = (healpix(2.85::double precision, -11.48::double precision))::text)
Does anyone have any idea why, or know how I can restore adequate
performance?
Do you understand the difference between the IMMUTABLE, STABLE and
VOLATILE attributes for functions?
http://www.postgresql.org/docs/8.1/static/sql-createfunction.html
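If healpix depends only on its arguments (no table reads, no settings),
declaring it IMMUTABLE lets the planner fold the call to a constant and
consider an index scan; STABLE is normally enough for index use too. A
sketch only, assuming that really holds for your function (and that your
version can alter it in place; otherwise re-create it with CREATE OR
REPLACE FUNCTION ... IMMUTABLE):

ALTER FUNCTION healpix(double precision, double precision) IMMUTABLE;

Then re-run the EXPLAIN and see whether the seq scan goes away.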
However, the real problem in the example above is that the fast query
tests against "hpixint", whereas the query using your function compares
against "hpix". Make sure you're testing against the same column, then post back.
--
Richard Huxton
Archonet Ltd