Tim Uckun wrote:
I have a pretty simple query on a pretty simple table with about 60
million records in it.
This is the query.
SELECT * FROM "changes" WHERE (id > 1935759 and company_id = 4 and
source_model_name = 'CommissionedVisit') ORDER BY id ASC LIMIT 1
The id field is the primary key. The other fields are indexed
(company_id and source_model_name).
This query takes about 30 seconds to run on a pretty beefy machine.
Here is the explain.
Limit  (cost=0.00..7.46 rows=1 width=45) (actual time=28799.712..28799.712 rows=0 loops=1)
  ->  Index Scan using changes_pkey on changes  (cost=0.00..2331939.52 rows=312519 width=45) (actual time=28799.710..28799.710 rows=0 loops=1)
        Index Cond: (id > 1935759)
        Filter: ((company_id = 4) AND ((source_model_name)::text = 'CommissionedVisit'::text))
Total runtime: 28799.749 ms
It seems to me that it's ignoring the indexes on the text fields. Is
that right?
The planner probably thinks the id condition is the cheaper way to
limit the result set.
How many records are there for id > 1935759 ?
vs
How many records for company_id = 4 and source_model_name =
'CommissionedVisit' ?
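You can answer both questions by running the counts directly (a sketch
using the table and values from your query):

```sql
-- How selective is the id range condition?
SELECT count(*) FROM changes WHERE id > 1935759;

-- How selective are the two equality conditions combined?
SELECT count(*) FROM changes
WHERE company_id = 4
  AND source_model_name = 'CommissionedVisit';
```

Whichever count is smaller tells you which condition the planner should
be leading with.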
If this is a common query you could probably add a multi-column index
covering all three columns, with the equality columns first and id
last: (company_id, source_model_name, id). But if company_id and
source_model_name have only a few distinct values, it's not going to
help much.
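Something like this (a sketch; adjust the index name to taste). The
equality columns go first and id last, so the index matches both the
filter and the ORDER BY, letting the LIMIT 1 stop at the first
qualifying row:

```sql
-- CONCURRENTLY avoids locking out writes on a 60M-row table
CREATE INDEX CONCURRENTLY changes_company_model_id_idx
    ON changes (company_id, source_model_name, id);
```

After building it, re-run EXPLAIN ANALYZE to confirm the new index is
actually chosen.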
--
Postgresql & php tutorials
http://www.designmagick.com/
--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general