Tom Lane wrote on 20.06.2018 at 16:03:
>> Consider the following execution plan:
>> ...
>>   ->  Aggregate  (cost=26.87..26.87 rows=1 width=32) (actual time=0.012..0.012 rows=1 loops=700000)
>>         ->  Bitmap Heap Scan on orders o2  (cost=3.45..26.85 rows=8 width=8) (actual time=0.004..0.008 rows=8 loops=700000)
>>               ->  Bitmap Index Scan on orders_customer_id_order_date_idx  (cost=0.00..3.45 rows=8 width=0) (actual time=0.003..0.003 rows=8 loops=700000)
>
>> My expectation would have been that the "Aggregate" step shows the actual time as a product of the number of loops.
>
> No, that looks fine to me.  The rule of thumb for reading this is total
> time spent in/below this node is "actual time" times "number of loops".

OK, if that is the rule, I can live with that ;)

> It seems a bit odd that the Agg node would account for a third of the
> total execution time when it's only processing 8 rows on average ...
> but maybe it's a really expensive aggregate.

But it's processing those 8 rows 700,000 times - so the total time seems correct.

FWIW, the query looks like this:

    select customer_id, amount, sales_person_id
    from orders o1
    where amount = (select max(o2.amount)
                    from orders o2
                    where o2.customer_id = o1.customer_id);

It's not a real-world query - it's just there to illustrate the drawbacks of correlated sub-queries.

> Another thought is that the EXPLAIN ANALYZE instrumentation itself
> can account for significant per-node-invocation overhead.  If the
> total execution time drops significantly when you add "timing off"
> to the EXPLAIN options, then that's probably a factor in making
> the Agg node look relatively expensive.

"timing off" doesn't really change the execution time (it's about 60ms faster without it).

Thanks for the answer - I am not really concerned about the query performance itself, just about the plan ;)

Thomas
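P.S.: Spelling Tom's rule of thumb out with the numbers from the plan above (back-of-the-envelope, so the usual rounding caveats apply):

    Aggregate:        0.012 ms x 700,000 loops ~ 8,400 ms spent in/below the node
    Bitmap Heap Scan: 0.008 ms x 700,000 loops ~ 5,600 ms spent in/below the node

That leaves roughly 8,400 - 5,600 = 2,800 ms for the aggregation itself, which matches the observation that the Agg node accounts for about a third of the total runtime.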
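P.P.S.: Since the query is only there to show the drawbacks of correlated sub-queries, here is one possible de-correlated rewrite for comparison. This is just a sketch using a window function; it assumes nothing about the orders table beyond the three columns used above:

    -- compute the per-customer maximum once via a window function
    -- instead of re-running the sub-query for every outer row
    select customer_id, amount, sales_person_id
    from (select customer_id, amount, sales_person_id,
                 max(amount) over (partition by customer_id) as max_amount
          from orders) t
    where amount = max_amount;

With this form the planner can get by with a single scan of orders instead of probing the index 700,000 times.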
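P.P.P.S.: For anyone who wants to reproduce the "timing off" comparison, the variant would look like this (TIMING OFF still collects row and loop counts, it just skips the per-node clock calls):

    explain (analyze, timing off)
    select customer_id, amount, sales_person_id
    from orders o1
    where amount = (select max(o2.amount)
                    from orders o2
                    where o2.customer_id = o1.customer_id);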