On Sun, Dec 18, 2022 at 06:29:41PM +0100, Frits Jalvingh wrote:
> By itself I'm used to bad query performance in Postgresql; our application
> only does bulk queries, and Postgres quite often makes terrible plans for
> those, but with enable_nestloop=false always set, most of them at least
> execute. The remaining failing queries are almost 100% caused by bad join
> sequences; I plan to work around those by forcing the join order from our
> application. For instance, the exact same query above can also generate the
> following plan (this one was created by manually setting
> join_collapse_limit = 1, but fast variants also occur quite often when
> disabling parallelism):

I, too, ended up setting enable_nestloop=false for our report queries, to
avoid the worst-case plans.

But you should also try to address the rowcount misestimates.  This
condition underestimates the rowcount by a factor of 69 (or 138 in the
plan you sent today):

| (soort = 'MSL'::text) AND (code = 'DAE'::text)

If those conditions are correlated, you can improve the estimate by adding
an extended statistics object:

| CREATE STATISTICS s_h_sturingslabel_ssm_stats ON soort, code FROM s_h_sturingslabel_ssm;
| ANALYZE s_h_sturingslabel_ssm;

Unfortunately, extended statistics currently only improve scans, not joins,
so that might *improve* some queries, but it won't resolve the worst
problems:

| Hash Join (cost=22,832.23..44,190.21 rows=185 width=47) (actual time=159.725..2,645,634.918 rows=28,086,472,886 loops=1)

Maybe you can improve that by adjusting the stats target or ndistinct; a
rough sketch of what I mean is below.

-- 
Justin
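(Untested sketch; the statistics target of 1000 and the n_distinct value
are only placeholders to be tuned against your data, and the table/column
names are taken from the plan fragment above.)

| -- raise the per-column statistics target for the correlated columns
| ALTER TABLE s_h_sturingslabel_ssm ALTER COLUMN soort SET STATISTICS 1000;
| ALTER TABLE s_h_sturingslabel_ssm ALTER COLUMN code SET STATISTICS 1000;
| -- or override the ndistinct estimate directly: positive values are an
| -- absolute count, negative values a fraction of the row count
| ALTER TABLE s_h_sturingslabel_ssm ALTER COLUMN code SET (n_distinct = 100);
| ANALYZE s_h_sturingslabel_ssm;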