Rob Sargent wrote 2010-10-01 15.43:
Then to get all statements, would one simply set log_min_duration_statement
to some arbitrarily small value?
From default postgresql.conf comments:
-1 is disabled, 0 logs all statements and their durations, > 0 logs only
statements running at least this number of milliseconds
Also see section 18.7.2 of the docs:
http://www.postgresql.org/docs/current/interactive/runtime-config-logging.html
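For example, to log every statement slower than half a second (the 500 ms
threshold is just an illustration; pick whatever suits your workload), you
could set in postgresql.conf:

    log_min_duration_statement = 500   # log statements taking >= 500 ms
    #log_min_duration_statement = 0    # would log every statement + duration
    #log_min_duration_statement = -1   # default: duration-based logging off

and then reload the configuration, e.g. with SELECT pg_reload_conf();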
Regards
/roppert
On 10/01/2010 04:30 AM, Thom Brown wrote:
2010/10/1 Bjørn T Johansen <btj@xxxxxxxxxx>:
We are using both DB2 and PostgreSQL at work, and DB2 has a nice tool, i5 Navigator, where one can enable logging of SQL statements and then it will
recommend indexes that should or could be created to increase speed...
Does there exist a similar tool for PostgreSQL?
You can set log_min_duration_statement to log statements which take longer
than a certain amount of time, and then use pgFouine to read the log files
and identify the most frequently run queries and the longest-running ones.
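As a rough sketch (the exact log_line_prefix pgFouine expects depends on its
version and log type, so check the pgFouine docs), a stderr-based setup
might look like:

    log_destination = 'stderr'
    logging_collector = on
    log_line_prefix = '%t [%p]: [%l-1] '    # timestamp, pid, log line counter
    log_min_duration_statement = 500        # example threshold

after which a report can be generated along the lines of
pgfouine.php -file postgresql.log > report.html.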
You can also use the auto_explain contrib module
(http://www.postgresql.org/docs/9.0/static/auto-explain.html) to log
the plans of queries which take too long. However, I don't think
pgFouine can read that output, at least not yet.
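For instance, to try auto_explain in a single session (this follows the
pattern shown in the docs; the 250 ms threshold is just an example):

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = '250ms';
    -- from here on, any statement in this session running longer than
    -- 250 ms gets its execution plan written to the server log

To enable it for all sessions, add auto_explain to shared_preload_libraries
in postgresql.conf instead.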
But to find out what indexes you'll need, getting used to reading query
plans will help, as it shows you more than just where sequential scans are
taking place. It also shows you what the planner estimated a query would
cost compared to how much it actually cost, which can provide insight into
tables which require vacuuming, indexes which might need clustering, or
table statistics which need adjusting to match your data.
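For example (table and predicate purely hypothetical):

    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

In the output, compare the planner's estimated rows and cost on each node
with the actual rows and time; a large mismatch often points to stale
statistics (fixable with ANALYZE), while an unexpected sequential scan on a
selective predicate suggests a missing index.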
There might be a tool like the one you describe out there for PostgreSQL,
although I'm not personally aware of one.