To me, the worst catch-22 we face in this area is that we'd like the
optimizer's choices of plan to be stable and understandable, but the
real-world costs of queries depend enormously on short-term conditions
such as how much of the table has been sucked into RAM recently by
other queries. I have no good answer to that one.
Yeah, there is currently no way to tell the optimizer things like:
- this table (or portion of a table) is not frequently accessed, so it won't
be in the cache; please prefer low-seek plans (like a bitmap index scan)
- this table (or portion of a table) is used all the time, so high-seek-count
plans like index scans or nested loops are fine, since everything is in
RAM
Apart from planner hints (argh), I see no way to give this information to the
machine, since it mostly lives in the DBA's head. Maybe a per-table
"cache temperature" parameter (hot, warm, cold)? But what about a log table
whose recent end is cached while the old records are not? It's messy.
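For what it's worth, the closest approximation today is overriding the
planner's cost GUCs around an individual query, which is a whole-query knob
rather than the per-table temperature discussed above. A minimal sketch,
assuming psycopg2 and a hypothetical hot_sessions table that is known to be
fully cached (connection parameters are made up too):

import psycopg2

# Hypothetical connection string and table name, purely for illustration.
conn = psycopg2.connect("dbname=app user=app")

with conn:
    with conn.cursor() as cur:
        # SET LOCAL lasts only until the end of the current transaction,
        # so the "random I/O is nearly free, the data is in RAM" assumption
        # applies to this one query and nothing else.
        cur.execute("SET LOCAL random_page_cost = 1.1")
        cur.execute("SELECT * FROM hot_sessions WHERE user_id = %s", (42,))
        rows = cur.fetchall()

# After the transaction commits, the session is back to the server defaults.

The same trick in the other direction (raising random_page_cost before
hitting a known-cold table) nudges the planner toward bitmap or sequential
scans, but it is still per-query, not the per-table knob we'd really want.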
Still, PG does an excellent job most of the time.