I got better results with this version. Basically, I run a first query that only does the aggregation, and then JOIN it to the other tables to fetch the remaining data (sketched below).
* SQL : http://paste.debian.net/1070007/
* EXPLAIN: https://explain.depesz.com/s/D0l
Not really "fast", but I gained 30%
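
Roughly, the pattern is the following (a simplified sketch, not the exact query from the paste above; apart from occtax.observation and cd_nom, the table and column names are only illustrative):

    -- 1) aggregate first, on the narrow main table only
    WITH agg AS (
        SELECT o.cd_nom,
               count(*) AS nb_obs          -- whatever aggregates the report needs
        FROM occtax.observation o
        GROUP BY o.cd_nom
    )
    -- 2) then join the aggregated rows to the other tables for the remaining columns
    SELECT a.cd_nom,
           a.nb_obs,
           t.nom_complet                   -- illustrative extra column from a joined table
    FROM agg a
    JOIN taxon t ON t.cd_nom = a.cd_nom    -- "taxon" is a placeholder name
    ORDER BY a.cd_nom;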
On Mon, Feb 25, 2019 at 09:54, kimaidou <kimaidou@xxxxxxxxx> wrote:
Thanks for your answers. I tried with:

> set session work_mem='250MB';
> set session geqo_threshold = 20;
> set session join_collapse_limit = 20;

It seems to have no real impact. Indeed, an index cannot really be used for sorting here, given the complexity of the returned fields.

What strikes me is that if I simplify the query a lot, removing all data but the main table (occtax.observation) primary key cd_nom and the aggregate, the query plan should be able to use the cd_nom index for sorting and produce a better plan (hash aggregate), but it does not seem so.
* EXPLAIN: https://explain.depesz.com/s/FR3h -> a group aggregate is used: GroupAggregate 10,639.313 ms 72.6 %

It is better, but 10 s for such a query still seems like bad performance to me.

Regards
Michaël

On Fri, Feb 22, 2019 at 19:06, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:

Michael Lewis <mlewis@xxxxxxxxxxx> writes:
> Does the plan change significantly with this-
> set session work_mem='250MB';
> set session geqo_threshold = 20;
> set session join_collapse_limit = 20;
Yeah ... by my count there are 16 tables in this query, so raising
join_collapse_limit to 15 is not enough to ensure that the planner
considers all join orders. Whether use of GEQO is a big problem
is harder to say, but it might be.
regards, tom lane
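
In concrete terms, for a query joining 16 tables the planner only considers all join orders when join_collapse_limit is at least the number of joined tables, and GEQO stays out of the picture as long as the number of FROM items is below geqo_threshold. Spelled out with the values already suggested in this thread (a sketch, not a tuning recommendation):

    -- let the planner search all join orders for a 16-table query
    set session join_collapse_limit = 20;  -- must be >= the number of joined tables (16 here)
    set session geqo_threshold = 20;       -- GEQO is used when FROM items >= this value; 16 < 20 keeps it off
    set session work_mem = '250MB';        -- more memory for sorts and hash aggregates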