"Kevin Grittner" <Kevin.Grittner@xxxxxxxxxxxx> writes: > Jori Jovanovich <jori@xxxxxxxxxxxxxxxx> wrote: >> what is the recommended way to solve this? > The recommended way is to adjust your costing configuration to > better reflect your environment. Actually, it's probably not the costs so much as the row estimates. For instance, that first query was estimated to select 20 out of a possible 24 rows. If 24 is indeed the right number of matches, then the planner is right and the OP is wrong: the indexscan is going to have to traverse almost all of the table and therefore it will be a lot slower than seqscan + sort. Now, if the real number of matches is a lot more than that, then the indexscan would make sense because it could be expected to get stopped by the LIMIT before it has to traverse too much of the table. So the true problem is to get the rowcount estimate to line up with reality. Unfortunately the estimates for ~* are typically not very good. If you could convert that to plain ~ (case sensitive) it'd probably work better. Also, if this isn't a particularly modern version of Postgres, a newer version might do a bit better with the estimate. regards, tom lane -- Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance