On Jul 20, 2009, at 10:24 AM, Bill Moran wrote:

> In response to "Greg Sabino Mullane" <greg@xxxxxxxxxxxx>:
>
>>>> In my experience, I've found that enabling full logging for a
>>>> short time (perhaps a few hours) gathers enough data to run
>>>> through tools like pgFouine and find problem areas.
>>>
>>> That is not possible for us. Logging millions of statements takes
>>> too much time.
>>
>> This is a ridiculous statement.
>
> No, it isn't.
>
>> In actual practice, full query logging generates about 1/50th as
>> much disk I/O as the actual database activity. If your systems
>> are so stressed that they can't handle another 2% increase, then
>> you've got bigger problems lurking.

That's really not true. Ever, probably, but certainly in my situation.

A lot of my inserts are large text fields (10k - 1M). Many of
those are never read again.

If I want to log all statements or slow statements then a lot of
what's logged is those large inserts. There's currently no way
to log the statement without logging all the data inserted, so
the log traffic produced by each insert is huge.
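
To make that concrete, here's roughly what's involved (a sketch -
the exact settings depend on what pgFouine wants to see, and names
below like the "documents" table are made up):

  # postgresql.conf - either of these captures the problem inserts
  log_min_duration_statement = 200   # log anything slower than 200 ms
  #log_statement = 'all'             # or log every statement

With either setting, an insert like

  INSERT INTO documents (body) VALUES ('...1MB of text...');

goes into the log verbatim, megabyte literal and all. There's no
knob to truncate the statement text before it's logged.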

Logging via syslog on Solaris, I've had reports of it slowing the
machine down to the point of unusability. (I'm fairly sure I know
why, and I suspect you can guess too, but this is on customer
machines.)
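
For anyone who doesn't want to guess: the usual suspect (and it's
an assumption on my part, since I can't poke at those boxes) is
that syslogd does a synchronous write of the log file after every
message. On Linux sysklogd you can turn that off by prefixing the
path in syslog.conf with '-':

  # /etc/syslog.conf
  local0.*    -/var/log/postgresql   # '-' means don't sync each line

As far as I know, Solaris syslogd has no equivalent, so every
logged statement - including those megabyte inserts - costs a
synchronous write.
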
The hit due to logging can be huge, even on fairly overpowered
systems.
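
If you're on 8.3 or later, one workaround worth trying (a sketch,
not something I've benchmarked) is to take syslog out of the
picture and let the logging collector write files directly:

  # postgresql.conf
  log_destination = 'stderr'
  logging_collector = on
  log_directory = 'pg_log'
  log_filename = 'postgresql-%Y-%m-%d.log'

That doesn't shrink the log traffic, but the writes are ordinary
buffered I/O instead of one synchronous write per message.
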
There are a lot of limitations in the current logging system when
it comes to capturing data for performance analysis, and that's
one of them. There's certainly significant room for improvement
in the logging system - some of that can be added externally,
but some of it really needs to be done within the core code.
Cheers,
Steve