Dear All,
I've just joined this list, so let me first thank you in advance for
your hospitality.
I'm having lots of trouble with variously slow running queries on a
production system. I've tried all the "obvious" fixes: tweaking the
query planner's settings, checking indexing and autovacuum, making sure
the machine has plenty of memory (20GB), running on solid-state disks, etc.
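For reference, here's a sketch of the sanity checks I've been running
(nothing exotic, just confirming the server is really using the settings
I think it is, and that autovacuum has actually visited the big tables):

```sql
-- Confirm the memory-related settings the server is actually using
SHOW shared_buffers;
SHOW work_mem;
SHOW effective_cache_size;

-- Check that autovacuum is on and has visited the big tables recently
-- (a NULL last_autovacuum on a busy table is a red flag)
SELECT relname, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY last_autovacuum NULLS FIRST;
```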
1. Is there any way to debug or trace a running query? I think I have
all the logging options turned on, but I'd like to see something like:
"Currently reading 3452 rows from table_x, at 0.2 us per row" or
whatever, being really, really verbose in the logfiles.
Likewise, is there any way to check whether, for example, postgres is
running out of work memory?
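In case it helps anyone answer: the closest I've found to a work_mem
check is logging temp-file spills and hunting for on-disk sorts in
EXPLAIN ANALYZE. A sketch of what I mean (table_x is one of ours,
some_column is just a stand-in):

```sql
-- Log every temporary file a backend spills to disk
-- (0 = log all sizes; needs superuser to SET per-session)
SET log_temp_files = 0;

-- Run the query under EXPLAIN ANALYZE and look for
-- "Sort Method: external merge  Disk: NNNNkB" in the output,
-- which means the sort did not fit in work_mem
EXPLAIN ANALYZE
SELECT *
FROM table_x
ORDER BY some_column;
```

But that's after-the-fact on a test run; it doesn't tell me what a
currently executing query is doing.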
2. Is there any way, whatsoever, to get any kind of "progress bar" for a
running query? I know these things have terrible faults, but anything
monotonic would be better than running completely blind.
[There's a wonderful paper here:
http://pages.cs.wisc.edu/~naughton/includes/papers/multiquery.pdf
which seems to have got 90% of the way there, but appears to have been
abandoned before covering the last 10%.]
The operations people in the warehouse are currently going crazy because
we can't ever answer the question "when will this query complete?". I
know it's hard to do accurately, but knowing the difference between "5
seconds to go" and "5 hours to go" would be so fantastically useful.
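The only monotonic signal I've found so far: if the slow query writes
its output to a table, you can watch that table grow from a second
session. A crude sketch ('target_table' being whatever the query fills;
procpid/current_query are the 8.4 column names):

```sql
-- Second session: the output table's size only grows, so polling this
-- gives a crude but monotonic progress signal
SELECT pg_size_pretty(pg_relation_size('target_table'));

-- Elapsed time of everything else that is running
SELECT procpid, now() - query_start AS elapsed, current_query
FROM pg_stat_activity
WHERE procpid <> pg_backend_pid();
```

Of course that tells me nothing for a plain SELECT, which is most of
what the warehouse runs.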
Thanks,
Richard
P.S. Sometimes, some queries seem to benefit from being cancelled and
then immediately re-started. As they are being run in a transaction, I
can't see how this could make a difference. Am I missing anything
obvious? Occasionally, a restart of PostgreSQL 8.4 itself seems to help.
--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance