Markus Schaber <schabi@xxxxxxxxxxxx> writes:
> An easy first approach would be to add a user tunable cache probability
> value to each index (and possibly table) between 0 and 1. Then simply
> multiply random_page_cost with (1-that value) for each scan.

That's not the way you'd need to use it.  But on reflection I do think
there's some merit in a "cache probability" parameter, ranging from zero
(giving current planner behavior) to one (causing the planner to assume
everything is already in cache from prior queries).  We'd have to look at
exactly how such an assumption should affect the cost equations ...

			regards, tom lane
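
[Editor's note: one way to read the objection above is that multiplying random_page_cost by (1 - p) drives the cost to zero at p = 1, whereas a cached page still costs something to touch. The sketch below is purely hypothetical, not PostgreSQL source: it interpolates the per-page cost between random_page_cost and an assumed small cached-page cost, so p = 0 reproduces current behavior and p = 1 leaves a nonzero CPU-ish cost. The names `page_fetch_cost` and `CACHED_PAGE_COST` are invented for illustration.]

```python
# Hypothetical model of a per-relation "cache probability" parameter p.
# p = 0: current planner behavior (pages cost full random_page_cost).
# p = 1: planner assumes every page is already in cache.

RANDOM_PAGE_COST = 4.0   # PostgreSQL's default random_page_cost setting
CACHED_PAGE_COST = 0.01  # assumed small cost to touch an already-cached page

def page_fetch_cost(pages, cache_probability):
    """Linearly interpolate per-page cost between disk and cache cost."""
    per_page = ((1 - cache_probability) * RANDOM_PAGE_COST
                + cache_probability * CACHED_PAGE_COST)
    return pages * per_page

print(page_fetch_cost(100, 0.0))  # 400.0 -- current behavior
print(page_fetch_cost(100, 1.0))  # 1.0   -- fully cached, still nonzero
```

Note that simply multiplying by (1 - p), as proposed in the quoted message, is the special case CACHED_PAGE_COST = 0, which would make a "fully cached" index scan look free to the planner.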