Arrrghhh, it was actually 10 (not that it really makes any
difference); I should have waited for the file to unzip before posting!!!
Ben Webber wrote:
Sorry, meant 2 gigs, not 10.
An interesting suggestion, but the problem with storing the logfiles in
a table for us is that a single day's log file is about 10 gigs
uncompressed, so an unacceptable amount of excess data would accumulate
in the database. It would, however, be feasible to write a script that
imports an archived logfile into a new temporary database on a different
server, searches it with SQL, and drops the database when finished.
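In outline that script could look something like the following untested sketch; "scratchhost", the database name and the logfile path are only placeholders, and note that a plain-text COPY can trip over tabs or backslashes inside logged statements (one argument for the CSV approach discussed further down):

#!/bin/sh
# Rough sketch only -- load a day's raw logfile into a scratch database
# on another server, search it with SQL, then drop the database.
# "scratchhost" and the paths are placeholders.
LOGFILE=/path/to/archived/postgresql.log
SCRATCH_DB=logsearch_$(date +%Y%m%d)

createdb -h scratchhost "$SCRATCH_DB"

psql -h scratchhost -d "$SCRATCH_DB" <<SQL
CREATE TABLE raw_log (line text);
-- NB: plain-text COPY misparses tabs or backslashes inside logged
-- statements; the CSV log format mentioned below avoids that problem.
\copy raw_log FROM '$LOGFILE'
SELECT line FROM raw_log WHERE line LIKE '%duration:%';
SQL

dropdb -h scratchhost "$SCRATCH_DB"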
Thanks for the suggestion though.
Ben.
Alvaro Herrera wrote:
Ben Webber wrote:
Hi,
I wrote a shell script that finds each duration and its related
statement in the log file and places them one after the other whenever
the duration exceeds a specified time, like this:
2008-10-31 02:00:49 GMT [23683] [mp_live] LOG: statement: CLUSTER;
2008-10-31 02:04:42 GMT [23683] [mp_live] LOG: duration: 232783.684 ms
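One way to do that pairing is an awk filter keyed on the backend PID; the following is only a sketch (not the original script) and assumes the log format shown above:

#!/bin/sh
# Sketch only -- pairs each "duration" line with the most recent
# "statement" line from the same backend PID and prints both when the
# duration exceeds a threshold in milliseconds.
# Assumes lines of the form: "date time TZ [pid] [db] LOG: ...".
THRESHOLD_MS=${1:-60000}
LOGFILE=${2:-postgresql.log}

awk -v limit="$THRESHOLD_MS" '
    $6 == "LOG:" && $7 == "statement:" {
        stmt[$4] = $0          # remember latest statement per [pid]
        next
    }
    $6 == "LOG:" && $7 == "duration:" {
        ms = $(NF - 1)         # duration value sits just before "ms"
        if (ms + 0 > limit + 0 && ($4 in stmt)) {
            print stmt[$4]
            print $0
            print ""
        }
    }
' "$LOGFILE"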
I wonder if you'd benefit from switching to CSV logs and then loading
them into a table. Querying with SQL is probably going to be easier (and
more robust -- it'd work even with embedded newlines, etc.).
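For reference, the csvlog route is roughly: set log_destination = 'csvlog', create a table matching the CSV column layout, and COPY the file in. The sketch below follows the column list given in the 8.3 documentation (check the docs for your own version); the database name and CSV path are placeholders and it is untested as written:

#!/bin/sh
# Sketch of the csvlog approach -- column list follows the 8.3 docs
# ("Using CSV-Format Log Output"); verify the list for your version.
CSVLOG=/path/to/postgresql-2008-10-31_000000.csv

psql -d logsearch <<SQL
CREATE TABLE postgres_log (
    log_time               timestamp(3) with time zone,
    user_name              text,
    database_name          text,
    process_id             integer,
    connection_from        text,
    session_id             text,
    session_line_num       bigint,
    command_tag            text,
    session_start_time     timestamp with time zone,
    virtual_transaction_id text,
    transaction_id         bigint,
    error_severity         text,
    sql_state_code         text,
    message                text,
    detail                 text,
    hint                   text,
    internal_query         text,
    internal_query_pos     integer,
    context                text,
    query                  text,
    query_pos              integer,
    location               text,
    PRIMARY KEY (session_id, session_line_num)
);

\copy postgres_log FROM '$CSVLOG' WITH csv

-- Duration messages for anything that ran longer than 60 seconds.
SELECT log_time, process_id, message
  FROM postgres_log
 WHERE message LIKE 'duration:%'
   AND substring(message from 'duration: ([0-9.]+) ms')::numeric > 60000;
SQL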