"David Parker" <dparker@xxxxxxxxxxxxxxxx> writes:
> 1) Given that the data is all cached, what can we do to make sure that
> postgres is generating the most efficient plans in this case? We have
> bumped up effective_cache_size, but it had no effect.

If you're willing to bet on everything being in RAM all the time, dropping random_page_cost to 1 would be a theoretically sound thing to do. In any case you should look at reducing it considerably from the default setting of 4.

Something that might also be interesting is to try increasing all the cpu_xxx cost factors, on the theory that since the unit of measurement (one sequential page fetch) now corresponds to an action involving no actual I/O, the relative costs of invoking an operator, etc., should be rated higher than when you expect actual I/O. I'm not really confident that this would make things better --- you might find that any improvement would be swamped by the low accuracy with which we model CPU costs (such as the assumption that every operator costs the same to evaluate). But it's worth some experimentation.

			regards, tom lane
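A minimal sketch of the kind of experiment being suggested: set the cost factors at session level (so nothing is changed globally) and compare plans with EXPLAIN. The specific values below are illustrative guesses, not recommendations --- the point is to lower random_page_cost toward 1 and scale the cpu_xxx factors up from their defaults.

```sql
-- Session-only changes; reconnecting (or RESET ALL) restores the defaults.
SET random_page_cost = 1.0;          -- default 4; fully cached data means
                                     -- random reads cost no more than sequential
SET cpu_tuple_cost = 0.03;           -- default 0.01  (illustrative 3x bump)
SET cpu_index_tuple_cost = 0.015;    -- default 0.005
SET cpu_operator_cost = 0.0075;      -- default 0.0025

-- Then compare the planner's choices against the old settings:
EXPLAIN ANALYZE SELECT ...;          -- your problem query here
```

Since the per-page unit cost stays fixed at 1.0, raising the cpu_xxx factors and lowering random_page_cost both push the planner toward plans whose cost is dominated by CPU work rather than assumed disk I/O.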