Mladen Gogala <mladen.gogala@xxxxxxxxxxx> wrote:

> create a definitive bias toward one type of the execution plan.

We're talking about trying to support the exact opposite. This all started because a database which was tuned for good response time on relatively small queries against a "hot" portion of some tables chose a bad plan for a weekend maintenance run against the full tables. We're talking about the possibility of adapting the cost factors based on table sizes as compared to available cache, to more accurately model the impact of needing to do actual disk I/O for such queries.

This is also very different from trying to adapt queries to what happens to be in cache at the moment. As already discussed on a recent thread, the resulting plan instability and the failure to ever converge on an effective cache set make that a bad idea. The idea discussed here would maintain a stable plan for a given query; it would just help choose a good plan based on the likely level of caching.

-Kevin

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
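[Editor's sketch of the idea above: blending per-page cost by the fraction of a table expected to fit in cache. The function name, parameters, and default cost values are illustrative assumptions, not PostgreSQL's actual planner code; PostgreSQL's real knobs in this area are settings like seq_page_cost, random_page_cost, and effective_cache_size.]

```python
def effective_page_cost(table_pages, cache_pages,
                        cached_cost=0.01, disk_cost=4.0):
    """Hypothetical model: per-page fetch cost for a table scan,
    interpolated between a cached-page cost and a disk-page cost
    by the fraction of the table likely to be in cache.

    This is NOT how the PostgreSQL planner computes costs; it only
    illustrates the size-vs-cache adaptation discussed in the post.
    """
    if table_pages <= 0:
        return cached_cost
    # Fraction of the table that could fit in the available cache.
    cached_fraction = min(1.0, cache_pages / table_pages)
    # Small "hot" table: cost stays near the cached-page cost.
    # Table much larger than cache: cost approaches the disk cost,
    # which is what the weekend maintenance run actually pays.
    return cached_fraction * cached_cost + (1.0 - cached_fraction) * disk_cost
```

Under this toy model, a table that fits entirely in cache is costed at the cached-page rate, while a full scan of a table ten times larger than the cache is costed almost entirely at the disk rate — a stable function of table and cache size, not of whatever happens to be resident right now.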