Cheaper subquery scan not considered unless offset 0

Hello everyone,

Please consider the following three semantically equivalent, but differently written queries:

Query A:

SELECT * FROM items a INNER JOIN (
  SELECT item, sum(amount) stock FROM stocktransactions GROUP BY item HAVING sum(amount) >= 1
) b ON b.item = a."ID"

Query B:

SELECT * FROM items a INNER JOIN (
  SELECT item, sum(amount) stock FROM stocktransactions GROUP BY item
) b ON b.item = a."ID" WHERE b.stock >= 1

Query C:

SELECT * FROM items a INNER JOIN (
  SELECT item, sum(amount) stock FROM stocktransactions b GROUP BY item OFFSET 0
) c ON c.item = a."ID" WHERE c.stock >= 1

FYI: stocktransactions.item and stocktransactions.amount have NOT NULL constraints, and stocktransactions.item is a foreign key referencing items."ID", the primary key of items.
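
For reference, the relevant parts of the schema look roughly like this (column types are simplified for illustration and may not match the real definitions exactly):

CREATE TABLE items (
  "ID" bigint PRIMARY KEY
  -- ... further item columns ...
);

CREATE TABLE stocktransactions (
  item   bigint NOT NULL REFERENCES items ("ID"),
  amount double precision NOT NULL
  -- ... further transaction columns ...
);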

Queries A and B generate the same plan and execute as follows:

Merge Join  (cost=34935.30..51701.59 rows=22285 width=344) (actual time=463.824..659.553 rows=15521 loops=1)
  Merge Cond: (a."ID" = b.item)
  ->  Index Scan using "PK_items_ID" on items a  (cost=0.42..15592.23 rows=336083 width=332) (actual time=0.012..153.899 rows=336064 loops=1)
  ->  Sort  (cost=34934.87..34990.59 rows=22285 width=12) (actual time=463.677..466.146 rows=15521 loops=1)
        Sort Key: b.item
        Sort Method: quicksort  Memory: 1112kB
        ->  Finalize HashAggregate  (cost=32879.78..33102.62 rows=22285 width=12) (actual time=450.724..458.667 rows=15521 loops=1)
              Group Key: b.item
              Filter: (sum(b.amount) >= '1'::double precision)
              Rows Removed by Filter: 48277
              ->  Gather  (cost=27865.65..32545.50 rows=44570 width=12) (actual time=343.715..407.243 rows=162152 loops=1)
                    Workers Planned: 2
                    Workers Launched: 2
                    ->  Partial HashAggregate  (cost=26865.65..27088.50 rows=22285 width=12) (actual time=336.416..348.105 rows=54051 loops=3)
                          Group Key: b.item
                          ->  Parallel Seq Scan on stocktransactions b  (cost=0.00..23281.60 rows=716810 width=12) (actual time=0.015..170.646 rows=579563 loops=3)
Planning time: 0.277 ms
Execution time: 661.342 ms


Query C, on the other hand, thanks to the OFFSET 0 "optimization fence", executes with the following, more efficient plan:


Nested Loop  (cost=32768.77..41146.56 rows=7428 width=344) (actual time=456.611..525.395 rows=15521 loops=1 total=525.395)
  ->  Subquery Scan on c  (cost=32768.35..33269.76 rows=7428 width=12) (actual time=456.591..475.204 rows=15521 loops=1 total=475.204)
        Filter: (c.stock >= '1'::double precision)
        Rows Removed by Filter: 48277
        ->  Finalize HashAggregate  (cost=32768.35..32991.20 rows=22285 width=12) (actual time=456.582..468.124 rows=63798 loops=1 total=468.124)
              Group Key: b.item
              ->  Gather  (cost=27865.65..32545.50 rows=44570 width=12) (actual time=348.479..415.463 rows=162085 loops=1 total=415.463)
                    Workers Planned: 2
                    Workers Launched: 2
                    ->  Partial HashAggregate  (cost=26865.65..27088.50 rows=22285 width=12) (actual time=343.952..355.912 rows=54028 loops=3 total=1067.736)
                          Group Key: b.item
                          ->  Parallel Seq Scan on stocktransactions b  (cost=0.00..23281.60 rows=716810 width=12) (actual time=0.015..172.235 rows=579563 loops=3 total=516.705)
  ->  Index Scan using "PK_items_ID" on items a  (cost=0.42..1.05 rows=1 width=332) (actual time=0.003..0.003 rows=1 loops=15521 total=46.563)
        Index Cond: ("ID" = c.item)
Planning time: 0.223 ms
Execution time: 526.203 ms


I'm wondering: given that Query C's plan has a lower overall cost estimate than the plan for Queries A/B, why doesn't the planner choose that plan for Queries A and B as well?
It has a lower projected startup cost as well as a lower total cost, so apparently the optimizer does not consider such a plan with a subquery scan at all (otherwise it would pick it based on the lower cost estimates, right?) unless one forces it to via OFFSET 0.
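
One way to probe what the planner actually considers for Query B (just a diagnostic sketch, not something I'd run in production) is to temporarily disable the join strategies it currently prefers and see whether it ever falls back to the subquery-scan/nested-loop shape on its own:

BEGIN;
-- restrict the settings to this transaction only
SET LOCAL enable_mergejoin = off;
SET LOCAL enable_hashjoin = off;
EXPLAIN ANALYZE
SELECT * FROM items a INNER JOIN (
  SELECT item, sum(amount) stock FROM stocktransactions GROUP BY item
) b ON b.item = a."ID" WHERE b.stock >= 1;
ROLLBACK;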

Though I wouldn't necessarily call this a bug, it is an issue one currently has to work around with otherwise inadvisable optimization fences, and it would be great if this could be fixed.
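
For completeness, the other fence I'm aware of (equally inadvisable as a long-term workaround) is wrapping the aggregation in a CTE, which the releases I'm using always materialize, e.g.:

-- the CTE acts as a fence just like OFFSET 0 does here
WITH totals AS (
  SELECT item, sum(amount) stock FROM stocktransactions GROUP BY item
)
SELECT * FROM items a INNER JOIN totals b ON b.item = a."ID" WHERE b.stock >= 1;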

Thanks to the developer community for delivering this great product; I hope this report helps in enhancing it further.

Cheers,

Benjamin

-- 

Benjamin Coutu
ben.coutu@xxxxxxxxx

ZeyOS, Inc.
http://www.zeyos.com


