I've done some testing on different versions of Postgres.
Unfortunately, after the weekend the problem vanished.
The systems are running as usual and the query finishes in 500 ms.
It must have been an issue with the VMs or the disks.
Either way, thank you for your support.
By the way, here are some testing results.
----------------------------------------------------
Linux PG_13.2 (docker instance)

Gather  (cost=1000.00..573832.18 rows=486255 width=2567) (actual time=232.444..23682.816 rows=516517 loops=1)
  Output: column1, .. , column54
  Workers Planned: 2
  Workers Launched: 2
  Buffers: shared hit=15883 read=390758
  ->  Parallel Seq Scan on schema.logtable  (cost=0.00..524206.67 rows=202606 width=2567) (actual time=256.462..23522.715 rows=172172 loops=3)
        Output: column1, .. , column54
        Filter: ((logtable.archivestatus <= 1) AND ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002)))
        Rows Removed by Filter: 4533459
        Buffers: shared hit=15883 read=390758
        Worker 0: actual time=266.613..23529.215 rows=171917 loops=1
          JIT:
            Functions: 2
            Options: Inlining true, Optimization true, Expressions true, Deforming true
            Timing: Generation 0.805 ms, Inlining 52.127 ms, Optimization 150.748 ms, Emission 63.482 ms, Total 267.162 ms
          Buffers: shared hit=5354 read=130007
        Worker 1: actual time=270.921..23527.953 rows=172273 loops=1
          JIT:
            Functions: 2
            Options: Inlining true, Optimization true, Expressions true, Deforming true
            Timing: Generation 1.217 ms, Inlining 49.556 ms, Optimization 154.765 ms, Emission 65.153 ms, Total 270.690 ms
          Buffers: shared hit=5162 read=130108
Planning Time: 0.356 ms
JIT:
  Functions: 6
  Options: Inlining true, Optimization true, Expressions true, Deforming true
  Timing: Generation 3.578 ms, Inlining 106.136 ms, Optimization 443.580 ms, Emission 217.728 ms, Total 771.021 ms
Execution Time: 23736.150 ms

-----------
Query Takes 245 ms
-----------
IndexSize: average 80 MB

----------------------------------------------------
Windows PG_12.6 (local instance)

Gather  (cost=1000.00..575262.60 rows=499935 width=2526) (actual time=2.155..2555.388 rows=516517 loops=1)
  Output: column1, .. , column54
  Workers Planned: 2
  Workers Launched: 2
  Buffers: shared hit=128 read=406517
  ->  Parallel Seq Scan on schema.logtable  (cost=0.00..524269.10 rows=208306 width=2526) (actual time=0.651..2469.220 rows=172172 loops=3)
        Output: column1, .. , column54
        Filter: ((logtable.archivestatus <= 1) AND ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002)))
        Rows Removed by Filter: 4533459
        Buffers: shared hit=128 read=406517
        Worker 0: actual time=0.637..2478.110 rows=172580 loops=1
          Buffers: shared hit=41 read=135683
        Worker 1: actual time=0.084..2474.863 rows=173408 loops=1
          Buffers: shared hit=42 read=135837
Planning Time: 0.201 ms
Execution Time: 2572.065 ms

-----------
Query Takes 18 ms
-----------
IndexSize: average 300 MB

----------------------------------------------------
Windows PG_13.2 (local instance)

Gather  (cost=1000.00..575680.37 rows=503383 width=2531) (actual time=1.045..2586.700 rows=516517 loops=1)
  Output: column1, .. , column54
  Workers Planned: 2
  Workers Launched: 2
  Buffers: shared hit=8620 read=398025
  ->  Parallel Seq Scan on schema.logtable  (cost=0.00..524342.07 rows=209743 width=2531) (actual time=0.346..2485.163 rows=172172 loops=3)
        Output: column1, .. , column54
        Filter: ((logtable.archivestatus <= 1) AND ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002)))
        Rows Removed by Filter: 4533459
        Buffers: shared hit=8620 read=398025
        Worker 0: actual time=0.155..2487.411 rows=174277 loops=1
          Buffers: shared hit=2954 read=133173
        Worker 1: actual time=0.746..2492.533 rows=173762 loops=1
          Buffers: shared hit=2813 read=133935
Planning Time: 0.154 ms
Execution Time: 2604.983 ms

-----------
Query Takes 18 ms
-----------
IndexSize: average 80 MB

----------------------------------------------------
Windows PG_12.6 (remote instance)

Bitmap Heap Scan on schema.logtable  (cost=10326.36..449509.96 rows=530847 width=2540) (actual time=406.235..6770.263 rows=516517 loops=1)
  Output: column1, .. , column54
  Recheck Cond: ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))
  Filter: (logtable.archivestatus <= 1)
  Heap Blocks: exact=294098
  Buffers: shared hit=3632 read=291886
  ->  BitmapOr  (cost=10326.36..10326.36 rows=536922 width=0) (actual time=212.117..212.124 rows=0 loops=1)
        Buffers: shared hit=1420
        ->  Bitmap Index Scan on idx_entrytype  (cost=0.00..1196.37 rows=64525 width=0) (actual time=30.677..30.678 rows=65970 loops=1)
              Index Cond: (logtable.entrytype = 4000)
              Buffers: shared hit=183
        ->  Bitmap Index Scan on idx_entrytype  (cost=0.00..4605.07 rows=249151 width=0) (actual time=110.538..110.539 rows=225283 loops=1)
              Index Cond: (logtable.entrytype = 4001)
              Buffers: shared hit=619
        ->  Bitmap Index Scan on idx_entrytype  (cost=0.00..4126.79 rows=223247 width=0) (actual time=70.887..70.888 rows=225264 loops=1)
              Index Cond: (logtable.entrytype = 4002)
              Buffers: shared hit=618
Settings: temp_buffers = '80MB', work_mem = '1GB'
Planning Time: 0.409 ms
Execution Time: 7259.515 ms

-----------
Query Takes 570 ms
-----------
IndexSize: average 300 MB
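
For context, the statement behind all of these plans looks roughly like the following. It is reconstructed from the Filter/Recheck conditions shown above, so treat it as an approximation; the real select list is column1 .. column54 and any ORDER BY is omitted. The size query at the end is likewise only a sketch of how the "IndexSize" figures could be checked, using idx_entrytype from the bitmap plan.

-- Approximate shape of the tested query, reconstructed from the
-- Filter / Recheck conditions in the plans above (not the exact text).
EXPLAIN (ANALYZE, BUFFERS, VERBOSE)
SELECT *                             -- in reality column1 .. column54
FROM schema.logtable
WHERE (entrytype = 4000 OR entrytype = 4001 OR entrytype = 4002)
  AND archivestatus <= 1;

-- Assumed way of checking the reported per-index sizes:
SELECT pg_size_pretty(pg_relation_size('idx_entrytype'));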
On Sat, 8 May 2021 at 14:06, Semen Yefimenko <semen.yefimenko@xxxxxxxxx> wrote:
> Unless I'm overlooking something obvious, one result has 500 000 rows, the other 7 000.

You are right, it wasn't the same data set. I have two datasets, one containing 12 million entries and the other 14 million. I accidentally used the one with 12 million entries, in which that table only contains about 7 000 rows matching that SQL query.
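
A quick per-value count, sketched below with the table and filter values from the plans (illustrative only, not the exact check that was run), makes this kind of mix-up easy to spot:

-- Illustrative sanity check: how many rows actually match the filter
-- in the currently loaded data set, broken down by entrytype.
SELECT entrytype, count(*)
FROM schema.logtable
WHERE entrytype IN (4000, 4001, 4002)
  AND archivestatus <= 1
GROUP BY entrytype
ORDER BY entrytype;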
For now I have tested Postgres 12.6 with default values and it finished in 12 seconds. I'll do some thorough testing and let you know once I'm finished; sorry for the confusion.

On Sat, 8 May 2021 at 00:17, didier <did447@xxxxxxxxx> wrote:

> Are you sure you're using the same data set?
> Unless I'm overlooking something obvious, one result has 500 000 rows, the other 7 000.