I have been a PostgreSQL systems administrator since version 7.0. I am also
the primary logging architect for a 4000+ device network
infrastructure and a SIEM alerting infrastructure architect.
Syslog is by far the best approach to logging. With the right product it is way FASTER than stderr, and there are a ton of tools to parse, analyze, view, and report on syslog streams.

There is one caveat with PostgreSQL <= 9.5: syslog messages are wrapped at approximately 80 characters, which makes parsing and error detection problematic. pgBadger may address this limitation by unwrapping such log messages, whereas generic log parsing engines have no specialized knowledge of how PostgreSQL lines might be wrapped. In PostgreSQL <= 9.5 this is a compile-time-only option, but starting in 9.6 it is a runtime configuration directive.

There are two strong reasons for using syslog:

1. In a well-architected logging solution, the syslog process on the host will also send the log messages to a central log server. This means that if the database server is compromised, leaving an untrusted set of log files, there is still a trusted copy of the logs on another server.

2. When running a high-availability or clustered database, all of the logs can be aggregated to a central log server, which places the logs from all of the database servers into one easy-to-read/parse/process location.

I hope this provides some rationale for using syslog.

Evan.

On 08/04/2017 09:26 AM, Don Seiler wrote:
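P.S. For reference, a minimal sketch of the relevant postgresql.conf settings. This assumes PostgreSQL 9.6 or later, where the message-splitting behavior became the runtime directives syslog_split_messages and syslog_sequence_numbers; facility and ident values are just examples:

```ini
# postgresql.conf -- sketch, assuming PostgreSQL 9.6+
log_destination = 'syslog'        # send server log output to syslog instead of stderr
syslog_facility = 'LOCAL0'        # syslog facility to log under (example value)
syslog_ident = 'postgres'         # program-name tag on each syslog line
syslog_split_messages = off       # 9.6+: do not split long messages into chunks
syslog_sequence_numbers = on      # 9.6+: prefix messages so syslog doesn't suppress repeats
```

And a hypothetical rsyslog rule for the central-log-server setup described above (the hostname is an assumption):

```ini
# /etc/rsyslog.d/50-postgres.conf -- hypothetical forwarding rule
# "@@" forwards over TCP; a single "@" would use UDP
local0.*    @@central-log.example.com:514
```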