On 10.12.2011 23:40, Daniel Cristian Cruz wrote:
> At work we have a 24 cores server, with a load average around 2.5.

A single query is processed by a single CPU, so even if the system is
not busy, a single query may hit a CPU bottleneck. The real issue is the
instrumentation overhead (timing etc.). On some systems (with a slow
gettimeofday) this may be a significant problem, as the query hits the
CPU boundary sooner.

> I don't know yet if a system which uses some unused CPU to minimize the
> load of a bad query identified early is bad or worse.

Not really, due to the "single query / single CPU" rule.

> Indeed, I don't know if my boss would let me test this in production,
> but it would be good to know how things work in "auto-pilot" mode.

What I was pointing out is that you probably should not enable logging
of "explain analyze" output via "auto_explain.log_analyze = true". There
are three levels of detail:

1) basic, just log_min_duration_statement
2) auto_explain, without 'analyze' - just the explain plan
3) auto_explain, with 'analyze' - the explain plan with actual values

Levels (1) and (2) are quite safe (unless the minimum execution time is
set too low).

Tomas

> 2011/12/10 Tomas Vondra <tv@xxxxxxxx <mailto:tv@xxxxxxxx>>
>
>     There's an auto_explain contrib module that does exactly what
>     you're asking for. Anyway, explain analyze is quite expensive -
>     think twice before enabling it on a production server where you
>     already have performance issues.
>
>     Tomas
>
>     On 10.12.2011 17:52, Daniel Cristian Cruz wrote:
>     > Hi all,
>     >
>     > I'm trying to figure out some common slow queries running on the
>     > server, by analyzing the slow queries log.
>     >
>     > I found debug_print_parse, debug_print_rewritten, and
>     > debug_print_plan, which are too verbose and log all queries.
>     >
>     > I was thinking of something like a simple explain analyze, just
>     > for queries logged with log_min_duration_statement, with the
>     > query too.
>     > Is there a way to configure PostgreSQL to get this kind of
>     > information? Maybe I'm missing something? Is it too hard to hack
>     > into the sources and do it by hand? I have never touched the
>     > PostgreSQL sources.
>     >
>     > I'm thinking of writing a paper that needs this information for
>     > my postgraduate course. The focus of my work will be the log
>     > data, not PostgreSQL itself. If I succeed, maybe it can become a
>     > tool to help all of us.
>     >
>     > Thank you,
>     > --
>     > Daniel Cristian Cruz
>     > クルズ クリスチアン ダニエル
>
>     --
>     Sent via pgsql-performance mailing list
>     (pgsql-performance@xxxxxxxxxxxxxx
>     <mailto:pgsql-performance@xxxxxxxxxxxxxx>)
>     To make changes to your subscription:
>     http://www.postgresql.org/mailpref/pgsql-performance
>
> --
> Daniel Cristian Cruz
> クルズ クリスチアン ダニエル
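P.S. To make the three levels concrete, here is a rough postgresql.conf
sketch. The 500 ms threshold is just an example value, and note that
shared_preload_libraries requires a server restart (for a single session
you can instead run: LOAD 'auto_explain';):

```
# --- Level 1: plain statement logging ---
# Log any statement that runs longer than 500 ms (example threshold)
log_min_duration_statement = 500        # milliseconds

# --- Level 2: auto_explain without ANALYZE ---
# Logs the plain EXPLAIN plan for slow queries
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = 500     # milliseconds

# --- Level 3: auto_explain with ANALYZE (expensive!) ---
# Adds actual row counts and per-node timing; the timing calls
# (gettimeofday) are what make this costly on some systems
#auto_explain.log_analyze = true
```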