On Mon, Jul 12, 2010 at 07:46:30PM +0200, Pavel Stehule wrote:
> 2010/7/12 Josip Rodin <joy@xxxxxxxxxxxxxx>:
> > On Mon, Jul 12, 2010 at 04:38:48PM +0200, Pavel Stehule wrote:
> >> 2010/7/12 Josip Rodin <joy@xxxxxxxxxxxxxx>:
> >> > On Mon, Jul 12, 2010 at 02:06:43PM +0800, Craig Ringer wrote:
> >> >> Meh, personally I'll stick to the good old profiling methods "is it fast
> >> >> enough", "\timing", and "explain analyze".
> >> >
> >> > I agree. Some hint could be included in 'explain analyze' output, maybe just
> >> > to separate the timings for things that are well covered by the query plan
> >> > optimizer from those that aren't.
> >>
> >> it is useless for functions - explain doesn't show lines of executed
> >> functions. Can you show an example of some more complex query?
> >
> > It doesn't have to show me any lines, but it could tell me which part of
> > the query is actually being optimized, and OTOH which part is simply being
> > executed N times unconditionally because it's a function that is marked as
> > volatile. That alone would be a reasonable improvement.
>
> these are different kinds of problems. You can have a very slow
> immutable function or a very fast volatile function. And with wrong
> function design your functions can be 10 times slower. yeah - you
> can multiply it via wrong or good design with wrong or good stability
> flag.

Well, it was demonstrated previously that the domain of very fast volatile
plpgsql functions is inherently limited by their startup overhead, which
makes them inherently slow with small data sets (and/or nesting). Sure, this
can become relatively insignificant on very large data sets, but as long as
there is a reasonable chance that this slows down a query by an order of
magnitude, IMHO it would be better to note it than to ignore it.

-- 
2. That which causes joy or happiness.
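(To illustrate the volatility point being argued above - a minimal sketch, where the function and table names are hypothetical, but the planner behavior is PostgreSQL's documented treatment of volatility categories:

```sql
-- IMMUTABLE promises the same result for the same arguments, so a call
-- with constant arguments can be folded to a constant once at plan time.
CREATE FUNCTION add_tax(amount numeric) RETURNS numeric AS $$
    SELECT amount * 1.25;
$$ LANGUAGE sql IMMUTABLE;

-- The same body marked VOLATILE: the planner must assume every call can
-- return a different result, so it re-executes the function for each row
-- and cannot use it in an index condition.
CREATE FUNCTION add_tax_volatile(amount numeric) RETURNS numeric AS $$
    SELECT amount * 1.25;
$$ LANGUAGE sql VOLATILE;

-- Compare the two in EXPLAIN ANALYZE against some hypothetical table:
-- the first filter becomes "total > 125", the second calls the function
-- once per row.
EXPLAIN ANALYZE SELECT * FROM orders WHERE total > add_tax(100);
EXPLAIN ANALYZE SELECT * FROM orders WHERE total > add_tax_volatile(100);
```

Neither plan currently flags the per-row calls as such, which is exactly the hint I'm suggesting explain analyze could carry.)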
-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general