Huan Ruan wrote:

> Kevin Grittner wrote:
>> With a low cache hit rate, that would generally be when the number
>> of lookups into the table exceeds about 10% of the table's rows.
>
> So far, my main performance issue comes down to this pattern where
> Postgres chooses a hash join that's slower than a nested loop indexed
> join. By changing those cost parameters, this query works as expected
> now, but there are others that fall into the same category and appear
> to be harder to convince the optimiser.
>
> I'm still a bit worried about this query because Postgres gets the
> record count right and knows the index is a primary key index, so it
> knows it is matching 0.05m out of 170m records (0.03%), yet it still
> chooses the sequential scan. Hopefully this is just related to that
> big index penalty bug introduced in 9.2.

Quite possibly, but it could be any of a number of other things, like
a type mismatch. It might be best to rule out other causes. If you
post the new query and EXPLAIN ANALYZE output, along with the settings
you have now adopted, someone may be able to spot something. It
wouldn't hurt to repeat the OS and hardware info with it so people
have it handy for reference.

-Kevin
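For what it's worth, the kind of type mismatch Kevin mentions can look like the sketch below. The table and column names are made up for illustration: comparing a bigint primary key against a numeric value forces the comparison to be done in numeric, so the btree index on the key cannot be used and the planner falls back to a sequential scan (or a hash join against a full scan).

    -- Hypothetical illustration of a join-key / parameter type mismatch.
    -- Assume bigtable.id is a bigint primary key.

    -- Both sides are integer types, so the primary key index is usable.
    EXPLAIN ANALYZE
    SELECT * FROM bigtable WHERE id = 42::bigint;

    -- The bigint column is implicitly cast to numeric for this comparison,
    -- so the btree index on id is not usable and the planner has to scan.
    EXPLAIN ANALYZE
    SELECT * FROM bigtable WHERE id = 42::numeric;

This sort of mismatch often comes from a driver or application binding parameters as numeric or text rather than the column's native type.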
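As a rough sketch of how to gather the information Kevin asks for (the final statement is a placeholder; substitute the actual problem query):

    -- Server version, for reference.
    SELECT version();

    -- Every setting changed from its default, so readers can see the
    -- cost parameters now in effect.
    SELECT name, setting, source
    FROM pg_settings
    WHERE source NOT IN ('default', 'override');

    -- The problem query itself, with buffer statistics included.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT ...;  -- substitute the actual query here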