"Matthew Lunnon" <mlunnon@xxxxxxxxxxxxx> writes:

> In this case the query takes 6.037 ms to run on 862 and 2.332 to run on 743.

The difference between 2ms and 6ms is pretty negligible. A single context switch or disk cache miss could throw the results off by that margin in either direction.

But what plan does 7.4.3 come up with if you set enable_hashjoin = off? I'm curious whether it comes up with the same nested-loop plan as 8.2, and what cost it says it has.

I think you need to find queries which take longer to run to have any reliable performance comparisons.

Note that the configuration parameters here aren't the same at all; it's possible the change of effective_cache_size from 800k to 2GB is what's changing the cost estimation. I seem to recall a change in the arithmetic for calculating nested-loop costs too, which made it more aggressive in estimating cache effectiveness.

Incidentally, default_statistics_target = 1000 is awfully aggressive. I found in the past that it caused the statistics table to become much larger and much slower to access. It may have caused some statistics to be TOASTed, or it may have just been the sheer volume of data present. It will also make your ANALYZE runs take a lot longer. I would suggest trying 100 first and raising it incrementally rather than jumping straight to 1000, and preferably only on the columns which really matter.

-- 
Gregory Stark
EnterpriseDB          http://www.enterprisedb.com
Ask me about EnterpriseDB's 24x7 Postgres support!
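For reference, the two experiments suggested above (disabling hash joins for one session to compare plans, and setting the statistics target per-column instead of globally) can be sketched like this; the table and column names are placeholders for whatever your query actually touches:

```sql
-- Session-local only: disable hash joins so the planner is forced to
-- consider the nested-loop plan, then compare costs with EXPLAIN ANALYZE.
SET enable_hashjoin = off;
EXPLAIN ANALYZE SELECT ... ;   -- your original query here
RESET enable_hashjoin;

-- Raise the statistics target only on the columns that matter,
-- rather than setting default_statistics_target = 1000 globally.
ALTER TABLE my_table ALTER COLUMN my_join_column SET STATISTICS 100;
ANALYZE my_table;
```

SET changes only the current session, so there's no risk of disturbing other clients while experimenting; the per-column SET STATISTICS takes effect at the next ANALYZE of that table.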