Re: SELECT DISTINCT chooses parallel seqscan instead of indexscan on huge table with 1000 partitions

On Fri, 10 May 2024, Tom Lane wrote:

> Dimitrios Apostolou <jimis@xxxxxxx> writes:
>> Further digging into this simple query, if I force the non-parallel plan
>> by setting max_parallel_workers_per_gather TO 0, I see that the query
>> planner comes up with a much higher cost:
>>
>>   Limit  (cost=363.84..1134528847.47 rows=10 width=4)
>>     ->  Unique  (cost=363.84..22690570036.41 rows=200 width=4)
>>           ->  Append  (cost=363.84..22527480551.58 rows=65235793929 width=4)
>> ...
>>
>> The total cost on the 1st line (cost=363.84..1134528847.47) has a much
>> higher upper limit than the total cost when
>> max_parallel_workers_per_gather is 4 (cost=853891608.79..853891608.99).
>> This explains the planner's choice. But I wonder why the cost estimation
>> is so far from reality.
>
> I'd say the blame lies with that (probably-default) estimate of
> just 200 distinct rows.  That means the planner expects to have
> to read about 5% (10/200) of the tables to get the result, and
> that's making fast-start plans look bad.
>
> Possibly an explicit ANALYZE on the partitioned table would help.
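For anyone reproducing this, the estimate in question is visible in pg_stats.
A quick sketch, assuming nothing beyond the table and column from the query:

-- Before ANALYZE, pg_stats presumably has no row at all for the partitioned
-- parent, so the planner falls back to its default of 200 distinct values:
SELECT tablename, attname, n_distinct
FROM pg_stats
WHERE tablename = 'test_runs_raw' AND attname = 'workitem_n';

-- The explicit ANALYZE suggested above, run on the parent so that
-- inheritance statistics get gathered across all the partitions:
ANALYZE test_runs_raw;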

It took long, but it finished:

ANALYZE
Time: 19177398.025 ms (05:19:37.398)

And it made a difference indeed; the serial plan is chosen now:

EXPLAIN SELECT DISTINCT workitem_n FROM test_runs_raw ORDER BY workitem_n DESC LIMIT 10;
 Limit  (cost=364.29..1835512.29 rows=10 width=4)
   ->  Unique  (cost=364.29..22701882164.56 rows=123706 width=4)
         ->  Append  (cost=364.29..22538472401.60 rows=65363905182 width=4)
               ->  Index Only Scan Backward using test_runs_raw__part_max20000k_pkey on test_runs_raw__part_max20000k test_runs_raw_1000  (cost=0.12..2.34 rows=1 width=4)
               ->  Index Only Scan Backward using test_runs_raw__part_max19980k_pkey on test_runs_raw__part_max19980k test_runs_raw_999  (cost=0.12..2.34 rows=1 width=4)
               ->  Index Only Scan Backward using test_runs_raw__part_max19960k_pkey on test_runs_raw__part_max19960k test_runs_raw_998  (cost=0.12..2.34 rows=1 width=4)
[...]
               ->  Index Only Scan Backward using test_runs_raw__part_max12460k_pkey on test_runs_raw__part_max12460k test_runs_raw_623  (cost=0.57..12329614.53 rows=121368496 width=4)
               ->  Index Only Scan Backward using test_runs_raw__part_max12440k_pkey on test_runs_raw__part_max12440k test_runs_raw_622  (cost=0.57..5180832.16 rows=184927264 width=4)
               ->  Index Only Scan Backward using test_runs_raw__part_max12420k_pkey on test_runs_raw__part_max12420k test_runs_raw_621  (cost=0.57..4544964.21 rows=82018824 width=4)
[...]

Overall I think there are two issues that Postgres could handle better
here:

1. Avoid the need for a manual ANALYZE on partitioned tables.

2. Create a different parallel plan, one that can exit early when the
   LIMIT is proportionally low. I feel the partitions could be
   parallel-scanned in order, so that the whole thing stops as soon as
   one partition has been fully read. (A workaround sketch for today
   follows below.)
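Until then, what works is roughly the combination discussed above. A sketch
on my side, not a recommendation; SET LOCAL and
max_parallel_workers_per_gather are standard, and the query is the one from
this thread:

-- Keep the parent's inheritance statistics fresh, e.g. from a cron job,
-- since autovacuum never runs ANALYZE on a partitioned parent:
ANALYZE test_runs_raw;

-- And/or pin the fast-start serial plan for this one query only:
BEGIN;
SET LOCAL max_parallel_workers_per_gather = 0;
SELECT DISTINCT workitem_n FROM test_runs_raw ORDER BY workitem_n DESC LIMIT 10;
COMMIT;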

Thank you!
Dimitris






