Re: Query with large number of joins


Marco Di Cesare <Marco.DiCesare@xxxxxxxxxxxxxxxxxx> writes:
> We are using a BI tool that generates a query with an unusually large number of joins. My understanding is that with this many joins the Postgres query planner can't possibly use an exhaustive search, so it drops into a heuristic algorithm. Unfortunately, the query runs quite slowly (~35 seconds) and seems to ignore primary keys and indexes where available.

> Query plan here (sorry had to anonymize):
> http://explain.depesz.com/s/Uml

It's difficult to make any detailed comments when you've shown us only an
allegedly-bad query plan, and not either the query itself or the table
definitions.

However, it appears to me that the query plan is aggregating over a rather
large number of join rows, and there are very few constraints that would
allow eliminating rows.  So I'm not at all sure there is a significantly
better plan available.  Are you claiming this query was instantaneous
on SQL Server?
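
[Editor's note: the "heuristic algorithm" mentioned in the question is PostgreSQL's genetic query optimizer (GEQO), which takes over once the number of FROM items reaches geqo_threshold; explicit JOINs are also flattened only up to join_collapse_limit. A minimal session-level sketch for experimenting with these planner settings, with purely illustrative values:]

```sql
-- Illustrative values only; the right settings depend on the query and
-- available planning time. Raising these lets the planner do an
-- exhaustive join-order search over more relations instead of falling
-- back to the genetic optimizer (GEQO).
SET join_collapse_limit = 20;  -- default 8: flatten more explicit JOINs
SET from_collapse_limit = 20;  -- default 8: same for subquery FROM items
SET geqo_threshold = 20;       -- default 12: defer GEQO to larger queries

-- Re-run the generated query and compare plans:
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- the BI tool's query goes here

RESET join_collapse_limit;
RESET from_collapse_limit;
RESET geqo_threshold;
```

Note that exhaustive search over ~20 relations can itself make planning slow; these settings trade planning time for plan quality.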

The only thing that jumps out at me as possibly improvable is that with
a further increase in work_mem, you could probably get it to change the
last aggregation step from Sort+GroupAggregate into HashAggregate,
which'd likely run faster ... assuming you can spare some more memory.
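
[Editor's note: a sketch of the work_mem experiment suggested above. The value shown is an assumption, not a recommendation; the hash table for the final aggregation must fit within work_mem for the planner to prefer HashAggregate:]

```sql
-- Illustrative only: raise work_mem for this session so the final
-- Sort + GroupAggregate step can switch to HashAggregate.
SET work_mem = '256MB';  -- hypothetical value; size to your data and RAM

-- Check whether the top of the plan now shows HashAggregate:
EXPLAIN ANALYZE SELECT ...;  -- the BI tool's query goes here

RESET work_mem;
```

Because work_mem applies per sort/hash node (and per session), a value that is safe for one query can exhaust memory under concurrent load, so it is usually set per session rather than globally.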

			regards, tom lane


-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance



