
Re: Query sometimes takes down server


 



Good idea, although tweaks to geqo_pool_size, geqo_generations, and geqo_selection_bias will affect all queries.
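One way around that, sketched below, is to set the GEQO parameters at the session level rather than server-wide; SET applies only to the current session, so other queries keep the defaults. The specific values are illustrative, not recommendations:

```sql
-- Session-level tuning: affects only this connection, not the whole server.
SET geqo_effort = 8;            -- range 1..10, default 5
SET geqo_pool_size = 500;       -- 0 means derive from geqo_effort
SET geqo_generations = 500;     -- 0 means derive from geqo_effort
SET geqo_selection_bias = 1.8;  -- useful range 1.5..2.0

-- ... run the problematic query here ...

RESET geqo_effort;
RESET geqo_pool_size;
RESET geqo_generations;
RESET geqo_selection_bias;
```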

For large, unwieldy queries you might want to look at breaking the query down into smaller pieces, e.g. break each statement down to two tables with one join (preferably an inner join with USING), and of course index the columns used in the join.
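A minimal sketch of that pattern, using hypothetical orders and customers tables (the table and column names are assumptions, not from the original query):

```sql
-- Index both sides of the join column first.
CREATE INDEX orders_customer_id_idx    ON orders (customer_id);
CREATE INDEX customers_customer_id_idx ON customers (customer_id);

-- One two-table inner join with USING, rather than a many-table query.
SELECT o.order_id, c.name
FROM orders o
INNER JOIN customers c USING (customer_id);
```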

If for any reason the results become unwieldy (the plan shows cartesian joins, full table scans, or a dreaded sort-merge when nested loops would do), you may want to consider creating a temp table to hold the results of each join for the current session.
Alternatively, create a view which contains the results in read-only form, possibly using a materialized view to allow indexes and refresh capability.
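Both options might look like this, again with hypothetical table names (note that native materialized views only arrived in PostgreSQL 9.3; on the 8.3 release discussed in this thread you would maintain a plain table by hand instead):

```sql
-- Option 1: stage one join's result in a temp table for this session.
CREATE TEMP TABLE order_cust AS
SELECT o.order_id, o.customer_id, c.name
FROM orders o
JOIN customers c USING (customer_id);

-- Option 2: a materialized view, which unlike a plain view can be
-- indexed and refreshed on demand.
CREATE MATERIALIZED VIEW order_cust_mv AS
SELECT o.order_id, o.customer_id, c.name
FROM orders o
JOIN customers c USING (customer_id);

CREATE INDEX order_cust_mv_cust_idx ON order_cust_mv (customer_id);
REFRESH MATERIALIZED VIEW order_cust_mv;
```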

HTH
Martin




> Subject: Re: Query sometimes takes down server
> From: pgsql@xxxxxxxxxxx
> To: jason@xxxxxxxxxxxxxxxx
> CC: pgsql-general@xxxxxxxxxxxxxx
> Date: Fri, 16 Jan 2009 10:07:26 -0800
>
> On Fri, 2009-01-16 at 08:43 -0600, Jason Long wrote:
> > The numbers in the table names are due to hibernate generating the
> > query.
>
> Well, that's what auto-generated schemas and queries do, I guess.
>
> > Now we are getting somewhere.
> > Someone suggested tweaking the genetic algorithm parameters.
> Has anyone else had to do this and what results did you achieve?
> > Can someone offer me some detailed advice on tweaking these
> > parameters?
>
> There are a lot of tables, so no matter what you do, the query will
> require GEQO (the genetic algorithm I was talking about).
>
> The fact that some of the plans are fast is good news: it means that
> it's possible to execute the query quickly.
>
> The other good news is that the slower plans are, indeed, estimated to
> be slower in the examples you provided (not by exactly proportional
> amounts, but it's still a good sign). If the estimations are so far off
> that they are basically random, GEQO won't help much; but in your case
> they look surprisingly good.
>
> I would try increasing geqo_effort, and tweaking geqo_pool_size and
> geqo_generations (mostly try increasing these last two, but smaller
> values might be useful), and tweak geqo_selection_bias randomly between
> 1.5 and 2.
>
> See useful ranges of the parameters here:
> http://www.postgresql.org/docs/8.3/static/runtime-config-query.html
>
> When you start to get stable execution times (make sure you don't just
> get lucky once), keep the values you're using. Post to the list with
> your results.
>
> You may be able to fix some of your queries (like this one), but I
> suspect this will just make the problem more rare. When you come up with
> some new query later, I think the problem will come back. The solution
> is really to have a more reasonable schema, something that PostgreSQL
> (and humans) can understand well enough to optimize.
>
> Regards,
> Jeff Davis
>
>


