Hi all!
Sorry for the delay (holidays).
Well, the most expensive sequential scan has been resolved.
I asked the DB team to drop the index and recreate it and, guess what: now PostgreSQL is using it and the query time dropped.
(thank you, @Gerardo Herzig!)
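For the record, the fix was plain DDL along these lines; the index name and column are only placeholders built from the anonymized plan, not the real definition:

  -- Placeholder names, for illustration only
  DROP INDEX IF EXISTS four_charlie_delta_tango_idx;
  CREATE INDEX four_charlie_delta_tango_idx ON four_charlie (delta_tango);
  -- REINDEX INDEX four_charlie_delta_tango_idx;  -- rebuilding in place is an alternative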
I think there's still room for improvement, but the problem is not so crucial right now.
I'll try to look into every suggestion mentioned here. Thank you all.
@Daniel Blanch
I'll make some tests with a materialized view. Thank you.
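The experiment will probably look something like the sketch below; the table names come from the anonymized plan and the aggregation is made up, so treat it only as the shape of the idea:

  -- Sketch only: placeholder query behind the materialized view
  CREATE MATERIALIZED VIEW mv_report AS
  SELECT fc.delta_tango, count(*) AS total
  FROM four_charlie fc
  JOIN six_quebec sq ON sq.golf_bravo = fc.delta_tango
  GROUP BY fc.delta_tango;

  -- A unique index lets REFRESH ... CONCURRENTLY run without blocking readers
  CREATE UNIQUE INDEX ON mv_report (delta_tango);
  REFRESH MATERIALIZED VIEW CONCURRENTLY mv_report;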
On the systems side: ask them whether they have changed anything in the effective_cache_size and shared_buffers parameters; I presume they haven't changed anything related to costs.
Replying to your comment: I think they tuned the server:
effective_cache_size = 196GB
shared_buffers = 24GB (shouldn't this be higher?)
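For anyone wanting to double-check these (and the cost parameters mentioned above), the current values can be read straight from pg_settings:

  -- Generic check; the parameter list is just the ones discussed in this thread
  SELECT name, setting, unit
  FROM pg_settings
  WHERE name IN ('effective_cache_size', 'shared_buffers',
                 'random_page_cost', 'seq_page_cost');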
@Kevin Grittner
Sorry, but I'm not sure whether autovacuum is aggressive enough; here are my related settings:
autovacuum                           | on
autovacuum_analyze_scale_factor      | 0.05
autovacuum_analyze_threshold         | 10
autovacuum_freeze_max_age            | 200000000
autovacuum_max_workers               | 3
autovacuum_multixact_freeze_max_age  | 400000000
autovacuum_naptime                   | 15s
autovacuum_vacuum_cost_delay         | 10ms
autovacuum_vacuum_cost_limit         | -1
autovacuum_vacuum_scale_factor       | 0.1
autovacuum_vacuum_threshold          | 10
autovacuum_work_mem                  | -1
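If it helps to judge whether autovacuum is keeping up, a generic check like this (not output from our server) shows dead tuples and the last autovacuum/analyze per table:

  SELECT relname, n_live_tup, n_dead_tup,
         last_autovacuum, last_autoanalyze
  FROM pg_stat_user_tables
  ORDER BY n_dead_tup DESC
  LIMIT 10;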
@Merlin Moncure
Big gains (if any) are likely due to indexing strategy.
I do see some suspicious casting, for example:
Join Filter: ((four_charlie.delta_tango)::integer = (six_quebec.golf_bravo)::integer)
Are you casting in the query or joining through dissimilar data types?
No casts in the query; the joins are on the same data types.
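One way to double-check that is to compare the column types in information_schema (four_charlie/six_quebec are the anonymized names from the plan above):

  -- Sketch: verify the join columns really share a data type
  SELECT table_name, column_name, data_type
  FROM information_schema.columns
  WHERE (table_name, column_name) IN
        (('four_charlie', 'delta_tango'),
         ('six_quebec', 'golf_bravo'));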
Thank you all for the answers. Happy 2017!