Re: performance of analytical query

On Fri, 12 Nov 2021 at 03:41, Justin Pryzby <pryzby@xxxxxxxxxxxxx> wrote:
On Thu, Nov 11, 2021 at 08:20:57PM +0100, Jiří Fejfar wrote:
> Hi folks,
>
> we have found that (probably after VACUUM ANALYZE) one analytical query
> started to be slow on our production DB. Moreover, more or less the same
> plan is used on our testing data (how to restore our testing data is
> described at the end of this email); or, better said, the same problem
> exists in both scenarios (production vs. testing data): a nested loop
> scanning a CTE several thousand times is used due to bad estimates:
> https://explain.dalibo.com/plan/sER#plan/node/87 (the query is included
> on dalibo).

> Do you have any idea how to get HASH JOINS in the CTE w_1p_data instead of
> NESTED LOOPs?
> * Add some statistics to not get bad estimates on "lower-level" CTEs?

Do you know why the estimates are bad?

I have no clear insight at the moment... the problem is probably bad estimates that chain along the whole tree of nodes... one bad estimate appeared right after an aggregation, for example. I would probably need to explore the whole execution plan carefully, identify the sources of imprecise estimates, and correct them with additional, more precise statistics where possible, right?
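If some of the misestimates come from correlated columns, extended statistics (PostgreSQL 10+) might be worth trying. A minimal sketch, assuming a correlated column pair on the mapping table (only estimation_cell appears in the plan below; plot_gid is a made-up column name for illustration):

    -- dependencies/ndistinct statistics over a correlated column pair
    -- (plot_gid is hypothetical; estimation_cell is from the plan):
    CREATE STATISTICS cm_plot2cell_mapping_stats (dependencies, ndistinct)
        ON estimation_cell, plot_gid FROM cm_plot2cell_mapping;
    ANALYZE cm_plot2cell_mapping;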
 
Index Scan using t_map_plot_cell__cell_gid__idx on cm_plot2cell_mapping cm_plot2cell_mapping (cost=0.29..18.59 rows=381 width=12) (actual time=0.015..2.373 rows=3,898 loops=1)
    Index Cond: (cm_plot2cell_mapping.estimation_cell = f_a_cell.estimation_cell)
    Buffers: shared hit=110

I don't know, but is the estimate for this portion of the plan improved by doing:
| ALTER TABLE f_a_cell ALTER estimation_cell SET STATISTICS 500; ANALYZE f_a_cell;

this does not help the plan as a whole... but I am thinking about increasing this parameter (the sample size) at the DB level
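For reference, a sketch of raising the sample size beyond a single column (500 mirrors Justin's example above; mydb is a placeholder):

    -- for one database:
    ALTER DATABASE mydb SET default_statistics_target = 500;
    -- or for the whole cluster (PostgreSQL 9.4+):
    ALTER SYSTEM SET default_statistics_target = 500;
    SELECT pg_reload_conf();
    -- re-sample so the new target takes effect:
    ANALYZE;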
 
> * In a slightly more complicated function I used temporary tables to be
> able to narrow statistics [2], but I am afraid of system-table bloat
> because this function is used very heavily in production
> (hundreds of thousands of calls per day when data are to be analyzed).

I would try this for sure - I think hundreds of calls per day would be no
problem.  If you're concerned, you could add manual calls to do (for example)
VACUUM pg_attribute; after dropping the temp tables.

it is hundreds of thousands of calls (10^5) per day... but yes, I got some hints on how to avoid the bloat (basically: keep the temp tables around longer and TRUNCATE them instead of dropping and re-creating them when possible)
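A sketch of that keep-and-truncate pattern, with illustrative table and column names (w_1p_data_tmp only echoes the CTE name from the query; the source query is made up):

    -- once per session: create the work table only if missing
    CREATE TEMP TABLE IF NOT EXISTS w_1p_data_tmp (
        estimation_cell integer,
        value           numeric
    );

    -- on each of the many calls: reuse instead of drop/create,
    -- which avoids churning pg_attribute and friends
    TRUNCATE w_1p_data_tmp;
    INSERT INTO w_1p_data_tmp
        SELECT estimation_cell, avg_value   -- hypothetical source
        FROM some_source_table;
    ANALYZE w_1p_data_tmp;  -- narrow, fresh statistics for the planner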
 
BTW, we disable nested loops for our analytic report queries.  I have never
been able to avoid pathological plans any other way.

I will think about that.
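As I understand it, the setting can be scoped to a single transaction so that only the report query is affected (enable_nestloop is the relevant planner setting):

    BEGIN;
    SET LOCAL enable_nestloop = off;   -- discourage nested loops here only
    -- ... run the analytic report query ...
    COMMIT;                            -- the setting reverts automatically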

AND

we further simplified the query and again got a good execution plan: https://explain.dalibo.com/plan/tCk :-)

I have some thoughts now:

* I know that PG is focused on OLTP rather than analytics, but we are quite happy with it overall and do not wish to use another engine for analytical queries... is there some "PG analytical best practices" guide available somewhere?
* It seems that the form / style of a query has a great impact on execution plans... I was very happy writing queries as CTEs on top of other CTEs, or as layered VIEWs, because you can really focus on the semantics of the problem, and I hoped the planner would somehow magically "compile" my code into something good enough performance-wise. Of course, I know I should avoid materialized CTEs, but the NOT MATERIALIZED version was not workable either: performance was bad and I was not even able to get oriented in the execution plan... (see the sketch of the two CTE modes below)
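For illustration, the per-CTE control over the optimization fence available since PostgreSQL 12 (the query itself is made up; only the table and column come from the plan above):

    -- MATERIALIZED forces the CTE to be computed once; the planner
    -- cannot push the outer WHERE clause into it:
    WITH w AS MATERIALIZED (
        SELECT estimation_cell, count(*) AS n
        FROM cm_plot2cell_mapping
        GROUP BY estimation_cell
    )
    SELECT * FROM w WHERE estimation_cell = 42;

    -- NOT MATERIALIZED lets the planner inline the CTE and push the
    -- filter down (this is also the default for single-reference CTEs):
    WITH w AS NOT MATERIALIZED (
        SELECT estimation_cell, count(*) AS n
        FROM cm_plot2cell_mapping
        GROUP BY estimation_cell
    )
    SELECT * FROM w WHERE estimation_cell = 42;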

Thank you for your ideas! J.
