Alberto Dalmaso <dalmaso@xxxxxxxxxx> wrote:

> P.S.: to understand what the query has to do (and 80% of the views
> have to do this): a lot of time is spent pivoting a table with a
> structure like
>   identifier, description_of_value, numeric_value
> that has to be transformed into
>   identifier, description_1, description_2, ..., description_n
> where n is not a fixed number (it changes depending on the type of
> calculation that was used to generate the rows in the table).
>
> Perhaps this information could help.

What would help more is the actual query, if that can be shared.  It
leaves a lot less to the imagination than descriptions of it.

A couple of things have already been requested which would help pin
down the reason the optimizer is not getting to a good plan, so that
it can be allowed to do a good job.  As Tom said, this would be a
much more productive focus than casting about for ways to force it
to do what you think is the best thing.  (Maybe, given the chance,
it can come up with a plan which runs in seconds rather than the
24-plus minutes you've been getting.)

With all the optimizer options on, and the from_collapse_limit and
join_collapse_limit values both set to 100, run an EXPLAIN (no
ANALYZE) on your big problem query.  Let us know how long the
EXPLAIN runs.  If it gets any errors, copy and paste all available
information.  (General descriptions aren't likely to get us very
far.)  Since EXPLAIN without ANALYZE only *plans* the query, but
doesn't run it, it should not take long to do this.

If there are any views or custom functions involved, showing those
along with the query source would be good.

If we get this information, we have a much better chance of finding
the real problem and getting it fixed.

-Kevin
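
For reference, the pivot described in the quoted message is commonly
written with conditional aggregation.  A minimal sketch follows; the
table name (value_rows) and the description labels ('descr_1',
'descr_2') are placeholders, not names from the actual schema, and
only the column names identifier, description_of_value and
numeric_value come from the description above:

  SELECT identifier,
         MAX(CASE WHEN description_of_value = 'descr_1'
                  THEN numeric_value END) AS description_1,
         MAX(CASE WHEN description_of_value = 'descr_2'
                  THEN numeric_value END) AS description_2
         -- ...one expression per expected description; when the set
         -- of descriptions is not known in advance, the column list
         -- has to be generated dynamically (the contrib tablefunc
         -- crosstab() function is one way to do that)
  FROM   value_rows
  GROUP  BY identifier;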
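
The settings requested above, as one possible psql session; the view
name big_problem_view is a placeholder, and the enable_* list is only
illustrative of "all the optimizer options on" (adjust it to whatever
was switched off in earlier tests):

  -- turn any previously disabled planner options back on
  SET enable_nestloop = on;
  SET enable_mergejoin = on;
  SET enable_hashjoin = on;

  -- let the planner consider reordering large FROM/JOIN lists
  SET from_collapse_limit = 100;
  SET join_collapse_limit = 100;

  \timing on    -- psql meta-command: reports how long the EXPLAIN takes

  -- plans the query without executing it
  EXPLAIN SELECT * FROM big_problem_view;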