Andreas Joseph Krogh <andreas@xxxxxxxxxx> writes:
> I wonder; In general, is there any downside of having join_collapse_limit = 14
> on modern hardware (32 cores, 64GB RAM), and geqo_threshold = 16?
> I'm aware of it increasing planning time, but is this really an issue in
> practice?

It can be.  The number of possible join orders is exponential in the
join size (at least 2^n, maybe worse, I don't recall at the moment).
There are heuristics in the planner that usually let it avoid
investigating large parts of that space; but the only thing that will
certainly limit the problem is preventing large join subproblems from
being considered in the first place --- which is what
join_collapse_limit/from_collapse_limit are for.

Depending on what your queries are like, you might never hit any of
the bad cases, so it might be fine.  But I'd be nervous about throwing
a wide mix of complex queries at a planner configured with high
thresholds.

If you've got just one problem query, it might be worth your time to
take note of the optimal join order (as seen in EXPLAIN when a good
plan is chosen) and rearrange the query so that the syntactic join
order matches it.  Then you should get the same plan even when
join_collapse_limit is small.

			regards, tom lane

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
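[Editor's note: the rearrangement Tom describes can be sketched as below. The tables, columns, and join order are made up for illustration; join_collapse_limit is a real PostgreSQL setting, and at its minimum value of 1 the planner keeps the syntactic order of explicit JOINs instead of searching for alternatives.]

```sql
-- Suppose EXPLAIN previously showed a good plan joining
-- orders -> customers -> items.  Writing the explicit JOINs in that
-- same syntactic order, with join_collapse_limit = 1, tells the
-- planner to keep that order rather than explore reorderings.
SET join_collapse_limit = 1;

SELECT o.id, c.name, i.sku
FROM orders o
JOIN customers c ON c.id = o.customer_id  -- first join from the good plan
JOIN items     i ON i.order_id = o.id;    -- second join from the good plan
```

Note that with the default join_collapse_limit of 8, the planner may still reorder explicit JOINs within that window; setting it to 1 pins the written order exactly.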