Tyler Reese <jukey91@xxxxxxxxx> wrote:

> I don't understand why the performance of case 4 is so much slower.
>
> case 4:
> mydb=> explain analyze SELECT * FROM "cdr" WHERE lower("CallingPartyNumber") = '9725551212' order by "key" limit 100;
>  Limit  (cost=0.00..72882.05 rows=100 width=757) (actual time=20481.083..30464.960 rows=11 loops=1)
>    ->  Index Scan using cdr_pkey on cdr  (cost=0.00..30036152.32 rows=41212 width=757) (actual time=20481.049..30464.686 rows=11 loops=1)
>          Filter: (lower("CallingPartyNumber") = '9725551212'::text)
>  Total runtime: 30465.246 ms

The planner thinks it will only need to read about 1/412th of the
table to find 100 matching rows (it estimates 41,212 matches, of
which the LIMIT needs only 100: 100/41212 ≈ 1/412), and that by
walking the primary-key index it can skip the sort on "key".
Since there aren't actually 100 matching rows (only 11 exist), it
has to read the whole table through the index.

Raising the statistics target and running ANALYZE might allow it
to use a more accurate estimate, and thereby make a better choice
(a sketch follows below).

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
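Concretely, the suggestion above might look like the following (a
sketch only: the target value 1000 is an arbitrary example, and since
the filter is on the expression lower("CallingPartyNumber") rather
than the bare column, re-analyzing after raising
default_statistics_target for the session is an alternative that also
refreshes any statistics gathered for an expression index, if one
exists):

mydb=> -- raise the per-column target; 1000 is an arbitrary example value
mydb=> ALTER TABLE "cdr" ALTER COLUMN "CallingPartyNumber" SET STATISTICS 1000;
mydb=> ANALYZE "cdr";

mydb=> -- alternatively, raise the default for this session only;
mydb=> -- ANALYZE then uses the higher target for every column, and,
mydb=> -- if there is an index on lower("CallingPartyNumber"), for
mydb=> -- the statistics collected on that indexed expression too
mydb=> SET default_statistics_target = 1000;
mydb=> ANALYZE "cdr";

Afterward, re-running the EXPLAIN ANALYZE above should show whether
the row estimate has come down toward the actual 11 rows; with a
realistic estimate the planner is more likely to find the matching
rows first and sort them, instead of scanning the whole table in
"key" order hoping to hit 100 matches early.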