Hi Mats,
- The row estimate is off, as you may have noticed. This can possibly be solved by raising `default_statistics_target` to e.g. 2500 (we typically use that) and running ANALYZE afterwards (see the first snippet below the list)
- However, I think the misestimate might be caused by evt_tx_hash being of type bytea. I believe PG cannot estimate JOINs on bytea very well and tends to pick row counts that are too low; hence the nested loop is chosen, and there might be no way around this. I have experienced similar things with JOINs on VARCHAR, e.g. with more than 3 fields in the comparison. The second snippet below shows how to check what the planner currently assumes about that column.
- Other things to look into (all of these can be tried per session first; see the last snippet below the list):
    - work_mem seems too low to me at 56MB; consider raising it into the GB range to avoid disk-based operations (sorts and hashes spilling to disk)
    - min_parallel_table_scan_size - try 0, so that smaller tables are also considered for parallel scans
- parallel_setup_cost (default 1000, maybe try 500)
    - parallel_tuple_cost (default 0.1, maybe try 0.01)
    - random_page_cost: as mentioned, consider raising this, maybe much higher (a factor of 10 or so); alternatively, seq_page_cost can typically be set much lower (0.1, 0.01), depending on your storage
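For the statistics target, a minimal sketch (the table name decoded_events is made up here; substitute your actual table):

    -- Option A: raise the target for one ANALYZE run only
    SET default_statistics_target = 2500;
    ANALYZE decoded_events;

    -- Option B: pin a higher target on the join column itself,
    -- so later ANALYZE runs keep it
    ALTER TABLE decoded_events ALTER COLUMN evt_tx_hash SET STATISTICS 2500;
    ANALYZE decoded_events;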
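To check what the planner currently assumes about the bytea join column, you can look at pg_stats (evt_tx_hash is taken from your plan; adjust the filter as needed):

    -- an n_distinct that is far from reality usually means the
    -- join selectivity estimate will be off as well
    SELECT tablename, attname, n_distinct, null_frac
    FROM pg_stats
    WHERE attname = 'evt_tx_hash';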
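All of the settings from the list can be tried per session before touching postgresql.conf, roughly like this (the values are the ones mentioned above; treat them as starting points, not definitive recommendations):

    SET work_mem = '1GB';
    SET min_parallel_table_scan_size = 0;
    SET parallel_setup_cost = 500;
    SET parallel_tuple_cost = 0.01;
    SET seq_page_cost = 0.1;   -- or raise random_page_cost instead
    -- then, in the same session: EXPLAIN <your query>;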
I hope this helps to get to a parallel plan without setting `enable_nestloop = off`. A plain EXPLAIN (without ANALYZE) should already be enough to see the difference.
Best,
Sebastian
--
Sebastian Dressler, Solution Architect
+49 30 994 0496 72 | sebastian@xxxxxxxxxxx

Swarm64 AS
Parkveien 41 B | 0258 Oslo | Norway
Registered at Brønnøysundregistrene in Norway under Org.-Number 911 662 787
CEO/Geschäftsführer (Daglig Leder): Thomas Richter; Chairman/Vorsitzender (Styrets Leder): Dr. Sverre Munck

Swarm64 AS Zweigstelle Hive
Ullsteinstr. 120 | 12109 Berlin | Germany
Registered at Amtsgericht Charlottenburg - HRB 154382 B