On Thu, 2007-08-30 at 15:39 -0600, Guy Fraser wrote:
> Below is the logging section from the postgresql.conf file. It
> would appear that you can configure PostgreSQL to log as much
> detail as you want, to wherever you want. You can then write a
> program to parse the log file and present the information you
> want based on your needs. I do something similar with a different
> application, which I have configured to use syslog. In syslog
> I direct the logging data to a pipe, which I read as a stream from
> an application I wrote that processes the real-time activity and
> extracts the useful information, which I send to an SQL database
> for further processing on a batch basis.

Capturing everything possible via logging and filtering/processing it later was a consideration of mine. It might work, but it's not ideal. I'm a little concerned about it for a few reasons:

1. Performance (although I haven't measured it).

2. Trying to figure out which tables are actually being read by grepping the logs is a mess. What if someone creates a rule/view/function over the table (having read permission on the table) and then reads from that? There may even be built-in functions that could accomplish the same thing, as long as the user has read access to the table. (There's a concrete example in the P.S. below.)

3. I'd have to make the schema or table name unique enough that filtering wouldn't produce false positives. Solvable, but not an elegant solution either.

My concern is that logging is for logging, not auditing. There's some overlap, but logging doesn't seem to do everything that I need directly.

Regards,
	Jeff Davis
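
P.S. For anyone curious, the setup Guy describes would look roughly like this in postgresql.conf (an untested sketch; the syslog facility is just an example):

    log_destination = 'syslog'     # send server log output to syslog
    syslog_facility = 'LOCAL0'     # facility that syslog.conf can route to a pipe
    log_statement = 'all'          # log the text of every statement
    log_line_prefix = '%u %d '     # prefix each line with user and database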
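
To make point 2 concrete, here's the kind of thing that slips right past a grep for the table name (names invented):

    CREATE VIEW innocuous_name AS SELECT * FROM payroll;
    SELECT * FROM innocuous_name;
    -- The statement log records "innocuous_name"; "payroll" never appears.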
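
And this is roughly the pipe-reading filter Guy describes, which also shows why points 2 and 3 worry me. It's only a sketch: the pattern is naive and the pipe path is made up.

    import re

    FIFO_PATH = "/var/log/pgsql.fifo"   # hypothetical pipe fed by syslog

    # Look for a table name after FROM/JOIN/INTO/UPDATE. A view or
    # function over the table defeats this (point 2), and a common
    # table name will match unrelated statements (point 3).
    TABLE_RE = re.compile(r"\b(?:FROM|JOIN|INTO|UPDATE)\s+(\w+)", re.IGNORECASE)

    with open(FIFO_PATH) as fifo:       # blocks until syslog writes
        for line in fifo:
            for name in TABLE_RE.findall(line):
                if name.lower() == "payroll":
                    print("possible read of payroll:", line.strip())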