Dimitrios Apostolou <jimis@xxxxxxx> writes:
> Further digging into this simple query, if I force the non-parallel plan
> by setting max_parallel_workers_per_gather TO 0, I see that the query
> planner comes up with a cost much higher:

>  Limit  (cost=363.84..1134528847.47 rows=10 width=4)
>    ->  Unique  (cost=363.84..22690570036.41 rows=200 width=4)
>          ->  Append  (cost=363.84..22527480551.58 rows=65235793929 width=4)
> ...

> The total cost on the 1st line (cost=363.84..1134528847.47) has a much
> higher upper limit than the total cost when
> max_parallel_workers_per_gather is 4 (cost=853891608.79..853891608.99).
> This explains the planner's choice.  But I wonder why the cost estimation
> is so far away from reality.

I'd say the blame lies with that (probably-default) estimate of just
200 distinct rows.  That means the planner expects to have to read
about 5% (10/200) of the tables to get the result, and that's making
fast-start plans look bad.

Possibly an explicit ANALYZE on the partitioned table would help.

			regards, tom lane
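
[Editor's sketch of the suggested fix, not part of the original reply.
The table name "test_runs" and column name "workitem_n" below are
hypothetical placeholders; substitute the actual partitioned parent and
the column being DISTINCT'ed.  Autovacuum does not gather statistics on
a partitioned parent itself, which is why an explicit ANALYZE can be
needed:]

    -- Collect statistics on the partitioned parent (hypothetical name):
    ANALYZE test_runs;

    -- Then check the planner's distinct-value estimate for the column.
    -- A negative value means a fraction of the row count; either way it
    -- should replace the default guess behind the rows=200 Unique estimate.
    SELECT n_distinct
      FROM pg_stats
     WHERE tablename = 'test_runs'
       AND attname   = 'workitem_n';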