On 4/7/2024 20:25, FREYBURGER Simon (SNCF VOYAGEURS / DIRECTION GENERALE
TGV / DM RMP YIELD MANAGEMENT) wrote:
*My question is: what can I change to get better index reading speed?*
What I already tried :
* Setting random_page_cost to prohibitive value (10000000) to force a
bitmap heap scan, because those can be made in parallel. This has
not worked, the optimizer is still doing an index scan on my fact table.
* Change effective_io_concurrency, max_parallel_workers_per_gather,
work_mem to much higher values.
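For reference, the tuning attempts described above correspond to session-level settings along these lines (the specific values are illustrative, not taken from the original post):

```sql
-- Illustrative session settings matching the attempts above;
-- the exact values here are assumptions, not from the report.
SET random_page_cost = 10000000;          -- make random heap access look prohibitive
SET effective_io_concurrency = 200;       -- allow more concurrent prefetch requests
SET max_parallel_workers_per_gather = 8;  -- permit more workers per Gather node
SET work_mem = '256MB';                   -- more memory for sorts/hashes/bitmaps
```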
I'm not sure this case is only about index scan speed. Look at the slow
index clause:
fth."ID_TRAIN" = ANY ('{17855,13945,536795,18838,18837,13574 ...
and many more values.
The IndexScan has to probe the index once for each of these values, and
for each value walk the heap pages to check the other conditions.
We have already discussed some optimisations related to this case in a
couple of pgsql-hackers threads, but I'm not sure we have a quick
solution right now.
If you want to use a BitmapScan (which might be reasonable to try here),
you need to split the huge ANY (...) clause into a sequence of ORs.
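A sketch of that rewrite, reusing the fth."ID_TRAIN" column from the clause above (the table name and the rest of the query are hypothetical):

```sql
-- Before: one large array probe, usually planned as a plain index scan.
SELECT count(*)
FROM fact_table fth
WHERE fth."ID_TRAIN" = ANY ('{17855,13945,536795}');

-- After: explicit OR branches, which the planner can combine into a
-- BitmapOr of per-value bitmap index scans feeding a single
-- (parallelizable) Bitmap Heap Scan.
SELECT count(*)
FROM fact_table fth
WHERE fth."ID_TRAIN" = 17855
   OR fth."ID_TRAIN" = 13945
   OR fth."ID_TRAIN" = 536795;
   -- ... one branch per value
```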
Also, maybe Parallel Append could help here, if you change the
corresponding startup and tuple costs to force such a plan.
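One way to nudge the planner in that direction is to lower the parallel cost GUCs at the session level (the values below are illustrative, not a recommendation):

```sql
-- Make parallel plans look almost free; tune rather than copy blindly.
SET parallel_setup_cost = 0;      -- default 1000
SET parallel_tuple_cost = 0;      -- default 0.1
SET enable_parallel_append = on;  -- on by default, shown for completeness
```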
--
regards, Andrei Lepikhov