Hi all!
Sorry for the delay (holidays).
Well, the most expensive sequential scan has been solved.
I asked the db team to drop the index and recreate it and guess what: now PostgreSQL is using it and the time dropped.
(Thank you, @Gerardo Herzig!) I think there's still room for improvement, but the problem is not as critical right now.
I'll try to investigate every suggestion mentioned here. Thank you all.
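For the record, the rebuild boils down to something like this (the index, table, and column names below are just placeholders, not our real ones):

    DROP INDEX IF EXISTS some_index;
    CREATE INDEX some_index ON some_table (some_column);
    ANALYZE some_table;  -- refresh the table's planner statistics while at it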
@Daniel Blanch
I'll run some tests with a materialized view. Thank you.
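Roughly what I have in mind (the view name and the SELECT below are stand-ins, since the real panel query isn't shown in this thread):

    -- stand-in query: the real, expensive panel query would go here instead
    CREATE MATERIALIZED VIEW user_panel_mv AS
        SELECT u.id AS user_id, count(*) AS pending_items
        FROM   users u
        JOIN   tasks t ON t.user_id = u.id
        GROUP  BY u.id;

    -- a unique index is required for REFRESH ... CONCURRENTLY
    CREATE UNIQUE INDEX user_panel_mv_user_id_idx ON user_panel_mv (user_id);

    -- refresh on a schedule without blocking readers of the view
    REFRESH MATERIALIZED VIEW CONCURRENTLY user_panel_mv;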
On the systems side: ask them whether they have changed anything in the effective_cache_size and shared_buffers parameters; I presume they haven't changed anything related to costs.
Replying to your comment: I think they tuned the server:
effective_cache_size = 196GB
shared_buffers = 24GB (shouldn't this be higher?)
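For completeness, those values (and the cost parameters you mention) can be checked from any regular session:

    SELECT name, setting, unit, source
    FROM   pg_settings
    WHERE  name IN ('effective_cache_size', 'shared_buffers',
                    'seq_page_cost', 'random_page_cost', 'work_mem');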
@Kevin Grittner
Sorry, I'm not sure how to tell whether autovacuum is aggressive enough, but here are my related settings:
autovacuum                           | on
autovacuum_analyze_scale_factor      | 0.05
autovacuum_analyze_threshold         | 10
autovacuum_freeze_max_age            | 200000000
autovacuum_max_workers               | 3
autovacuum_multixact_freeze_max_age  | 400000000
autovacuum_naptime                   | 15s
autovacuum_vacuum_cost_delay         | 10ms
autovacuum_vacuum_cost_limit         | -1
autovacuum_vacuum_scale_factor       | 0.1
autovacuum_vacuum_threshold          | 10
autovacuum_work_mem                  | -1
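In case it helps, this is the kind of check I can run to see whether those settings keep dead tuples under control (a generic query, nothing schema-specific):

    SELECT relname, n_live_tup, n_dead_tup,
           last_autovacuum, last_autoanalyze
    FROM   pg_stat_user_tables
    ORDER  BY n_dead_tup DESC
    LIMIT  20;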
@Merlin Moncure
Big gains (if any) are likely due to indexing strategy.
I do see some suspicious casting, for example:
Join Filter: ((four_charlie.delta_tango)::integer =
(six_quebec.golf_bravo)::integer)
Are you casting in the query or joining through dissimilar data types?
No casts in the query. The joins are on the same data types.
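If useful, the declared types of the join columns can be double-checked like this (using the obfuscated names that appear in the plan):

    SELECT table_name, column_name, data_type
    FROM   information_schema.columns
    WHERE  (table_name, column_name) IN (('four_charlie', 'delta_tango'),
                                         ('six_quebec',   'golf_bravo'));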
Thank you all for the answers. Happy 2017!
Flávio Henrique
--------------------------------------------------------
"There are only 10 types of people in the world: Those who understand binary, and those who don't"
--------------------------------------------------------
On Thu, Jan 5, 2017 at 12:40 PM, Merlin Moncure <mmoncure@xxxxxxxxx> wrote:
On Tue, Dec 27, 2016 at 5:50 PM, Flávio Henrique <yoshimit@xxxxxxxxx> wrote:
> Hi there, fellow experts!
>
> I need advice with a query that became slower after the 9.3 to 9.6 migration.
>
> First of all, I'm from the dev team.
>
> Before the migration, we (programmers) made some modifications to the query, bringing
> its average time from 8s down to 2-3s.
>
> As this query is the most executed one on our system (it builds the panel users
> work from), every bit that we can squeeze out of it will be nice.
>
> Now, after the server migration to 9.6, we're experiencing bad times with this
> query again.
>
> Unfortunately, I don't have the old query plan (9.3 version) to show you,
> but in the current version (9.6) I can see some buffers being written, which tells me
> that something is wrong.
>
> Our server has 250GB of memory available, but the database team says that
> they can't do anything to make this query better. I'm not sure about that, as some
> buffers are being written to disk.
>
> Any tip/help will be much appreciated (even from the query side).
>
> Thank you!
>
> The query plan: https://explain.depesz.com/s/5KMn
>
> Note: I already tried adding an index on the kilo_victor table, but PostgreSQL
> still thinks it is better to do a seq scan.
Hard to provide more without the query or the 'old' plan. Here are
some things you can try:
*) Set effective_io_concurrency high. You have some heap scanning
going on and this can sometimes help (but it should be marginal).
*) See if you can get any juice out of parallel query
*) Try playing with enable_nestloop and enable_seqscan. These are
hail-mary passes but worth a shot (see the sketch below).
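A minimal sketch of those session-level experiments (values are illustrative, not recommendations):

    SET effective_io_concurrency = 200;        -- higher is reasonable on SSD-backed storage
    SET max_parallel_workers_per_gather = 4;   -- lets 9.6 consider parallel plans
    SET enable_nestloop = off;                 -- hail-mary planner toggles,
    SET enable_seqscan = off;                  -- session-local only
    -- re-run the query here with EXPLAIN (ANALYZE, BUFFERS), then:
    RESET ALL;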
Run the query back to back with the same arguments in the same database
session. Does performance improve?
Big gains (if any) are likely due to indexing strategy.
I do see some suspicious casting, for example:
Join Filter: ((four_charlie.delta_tango)::integer =
(six_quebec.golf_bravo)::integer)
Are you casting in the query or joining through dissimilar data types?
I suspect your database team might be incorrect.
merlin