Re: Slow recursive CTE query questions, with row estimate and n_distinct issues

Greg Spiegelberg <gspiegelberg@xxxxxxxxx> writes:

> On Mon, Dec 28, 2020 at 7:51 AM Christopher Baines <mail@xxxxxxxxxxx> wrote:
>
>> Hi,
>>
>> I have a performance problem with a query. I've uploaded it, along with
>> the EXPLAIN ANALYZE output here [1].
>>
>> 1: https://explain.depesz.com/s/w5vP
>>
>> I think the query is taking longer than I'd like, as PostgreSQL is not
>> generating a great plan, in particular the estimated rows in parts of
>> the plan are way off from the actual number of rows produced, and I
>> think PostgreSQL might pick a better approach if it made more reasonable
>> estimates.
>>
>> Looking at the row numbers on [1], I think things start to go wrong at
>> row 11. The first part of the recursive CTE is estimated to generate
>> 22,122 rows, and this isn't far off the 15,670 rows it generates. The
>> WorkTable scan expects to generate 10x that though, at 221,220. Maybe
>> that's reasonable, I'm unsure?
>>
>> Things seem to continue getting out of hand from this point. The nested
>> loop on line 10 generates 232x fewer rows than expected, and this results
>> in an overall overestimate of 4,317x.
>>
>> My theory as to why the row estimates are so far off is that the planner
>> is using the n_distinct counts for the derivation_inputs table, and those
>> counts are way off [2]. I've set the stats target to the highest value for
>> this table, so I'm unsure what else I can do to improve these estimates.
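>>
>> (One thing I'm aware of but haven't tried is overriding n_distinct by
>> hand; "some_column" below is just a placeholder for whichever column's
>> estimate is the culprit:
>>
>>     ALTER TABLE derivation_inputs
>>       ALTER COLUMN some_column SET (n_distinct = -0.05);
>>     ANALYZE derivation_inputs;
>>
>> A negative value means that fraction of the row count, so it keeps
>> scaling as the table grows; a positive value is an absolute count.)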
>>
>> I've included relevant numbers for the two important tables below
>> [2][3], as well as descriptions of the tables [4][5].
>>
>> Does anyone know whether my theory about why the row estimates in the plan
>> are off is right, or have any ideas as to how I can make the query more
>> performant?

> Hi Chris
>
> What is your realistic target maximum execution time for the query?

It's for a web page. It can be slow, so 10 seconds is a reasonable
target.

> Ideas off the top of my head sans all the usual tuning of shared_buffers et
> al...
>
> When was the last time autovacuum vacuumed and analyzed the source tables?
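>
> Something like this shows when they last ran (swap in whichever tables
> your query touches):
>
>     SELECT relname, last_vacuum, last_autovacuum,
>            last_analyze, last_autoanalyze
>     FROM pg_stat_user_tables
>     WHERE relname = 'derivation_inputs';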
>
> Can you leverage a materialized view refreshing when necessary?  Thinking
> for the 3 joined tables but could be applied to the whole result.
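>
> A rough sketch, with "a", "b" and "c" standing in for the three joined
> tables (I don't know your schema, so the columns are made up):
>
>     CREATE MATERIALIZED VIEW joined_result AS
>       SELECT a.id, b.val AS b_val, c.val AS c_val
>       FROM a
>       JOIN b ON b.a_id = a.id
>       JOIN c ON c.b_id = b.id;
>
>     -- CONCURRENTLY needs a unique index on the materialized view
>     CREATE UNIQUE INDEX ON joined_result (id);
>
>     -- re-run whenever the underlying data changes (cron, after loads, etc.)
>     REFRESH MATERIALIZED VIEW CONCURRENTLY joined_result;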
>
> If you can guarantee there are no duplicate rows on either side of the
> current UNION, change it to UNION ALL.
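>
> In a recursive CTE the keyword sits between the two terms; a trivial
> standalone example:
>
>     WITH RECURSIVE n(i) AS (
>         SELECT 1
>       UNION ALL   -- UNION here would de-duplicate on every iteration
>         SELECT i + 1 FROM n WHERE i < 10
>     )
>     SELECT i FROM n;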
>
> If you have the luxury of faster disk, such as NVMe, create a tablespace
> and move data and/or indexes there.  Adjust storage planner options
> accordingly.
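>
> For instance (the path and the names are only illustrative):
>
>     CREATE TABLESPACE fast_nvme LOCATION '/mnt/nvme/pgdata';
>
>     ALTER TABLE derivation_inputs SET TABLESPACE fast_nvme;
>     ALTER INDEX derivation_inputs_pkey SET TABLESPACE fast_nvme;
>
>     -- tell the planner random I/O on that tablespace is cheap
>     ALTER TABLESPACE fast_nvme SET (random_page_cost = 1.1);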
>
> Personally, I like to see guardrails on RECURSIVE CTEs, but you know your
> data and clients better than I do, so they may not be necessary.
> Suggestions include (a sketch of both follows below):
>  1. Limit the recursion depth.  That doesn't seem to be the intent here,
> but ask whether the whole result is really needed.  Not knowing the
> application, perhaps 10 levels deep is all that's needed initially; if
> more is required, run the query again with a different
> guix_revisions.commit starting point.  Think of it as pagination.
>  2. Cycle detection / prevention, though whether it's needed depends on
> the hierarchy the data represents.  It doesn't seem to be a problem here,
> since the query as-is does return.

Thanks for your suggestions, Greg. It turns out that I managed to speed
the query up enough by increasing work_mem and shared_buffers. I hadn't
realised one of the sequential scans was executing 14 times; giving
PostgreSQL more resources means it now executes just once, which really
helps.
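
(For anyone finding this thread later, the sort of change involved looks
roughly like the below; the values are only illustrative and need tuning
to the machine:

    ALTER SYSTEM SET shared_buffers = '4GB';  -- takes effect after a restart
    ALTER SYSTEM SET work_mem = '256MB';
    SELECT pg_reload_conf();

work_mem can also be raised for just the session running the query with
SET work_mem = '256MB';)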

There's probably still room for improvement, but it's "fast enough" now.

Thanks again,

Chris


