Hi!

> -----Original Message-----
> From: Tom Lane [mailto:tgl@xxxxxxxxxxxxx]
> Sent: Sunday, December 4, 2005 19:32
> To: Markus Wollny
> Cc: pgsql-performance@xxxxxxxxxxxxxx
> Subject: Re: [PERFORM] Queries taking ages in PG 8.1, have
> been much faster in PG<=8.0

> The data is not quite the same, right? I notice different
> numbers of rows being returned.

No, you're right, I didn't manage to restore the 8.1 dump into the
8.0.3 cluster, so I took the quick route and restored the last dump
from my 8.0 installation. The numbers should be roughly within the
same range, though: table answer has 8,646,320 rows (counted and
estimated, as this DB is not live, obviously) and table participant
has 173,998 rows. For comparison, the production DB had an estimated
8,872,130 rows (8,876,648 counted) in answer and an estimated 178,165
rows (178,248 counted) in participant. As the numbers are a mere 2%
apart, I wouldn't expect that to make much of a difference.

> It seems that checking question_id/value via the index,
> rather than directly on the fetched tuple, is a net loss
> here. It looks like 8.1 would have made the right plan
> choice if it had made a better estimate of the combined
> selectivity of the question_id and value conditions, so
> ultimately this is another manifestation of the lack of
> cross-column statistics. What I find interesting though is
> that the plain index scan in 8.0 is so enormously cheaper
> than it's estimated to be. Perhaps the answer table in your
> 8.0 installation is almost perfectly ordered by session_id?

Not quite - there may be several concurrent sessions at any one time,
but ordinarily the answers for one session_id sit quite close
together, in many cases even in perfect sequence, so "almost
perfectly" might be a fair description, depending on the exact
definition of "almost" :)

> Are you using default values for the planner cost parameters?

I have to admit that I tuned random_page_cost and effective_cache_size
ages ago (7.1-ish) to values that seemed to work best then - and
haven't touched them since, although my data pool has grown quite a
bit over time. cpu_tuple_cost, cpu_index_tuple_cost and
cpu_operator_cost are at their default values.

> It looks like reducing random_page_cost would help bring the
> planner estimates into line with reality on your machines.

I had already set random_page_cost to 1.4, so I doubt that reducing it
further would do much good - reading the docs and the usual tuning
advice, I would have thought I should actually consider increasing it
a bit, as not all of my data fits in memory any more. Do you
nevertheless want me to try what happens if I reduce random_page_cost
even further?

Kind regards

   Markus
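
P.S. In case it's useful: something along these lines should reproduce
the estimate-vs-count comparison above, reading the planner's estimate
from pg_class (as maintained by VACUUM/ANALYZE) next to a plain
count(*) - the exact counts are of course slow on a table this size:

    -- planner's current row estimates
    SELECT relname, reltuples::bigint AS estimated_rows
    FROM pg_class
    WHERE relname IN ('answer', 'participant');

    -- exact counts for comparison
    SELECT count(*) FROM answer;
    SELECT count(*) FROM participant;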
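
Regarding the "almost perfectly ordered by session_id" question: the
correlation statistic in pg_stats should give a more precise answer
than my gut feeling - a value close to 1.0 (or -1.0) would mean the
heap really is laid out nearly in session_id order:

    SELECT tablename, attname, correlation
    FROM pg_stats
    WHERE tablename = 'answer'
      AND attname = 'session_id';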
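
And if you do want me to try a lower random_page_cost, I'd test it per
session rather than editing postgresql.conf, roughly like this (the
SELECT here is only a stand-in for the real query from my earlier
mail, and 1.0 is just an arbitrary test value):

    SHOW random_page_cost;          -- currently 1.4 here
    SHOW effective_cache_size;

    SET random_page_cost = 1.0;     -- affects this session only
    EXPLAIN ANALYZE
    SELECT count(*) FROM answer WHERE session_id = 12345;
    RESET random_page_cost;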