Thanks for the hint - we already have quite an elaborate DB-logging mechanism which reads from the logfiles (as Squid 3.0 only allowed logging to a file). And as we need to go live soon, I want to avoid last-minute changes to the logging... So for now I will "redesign" the logging to just log to a single file, and maybe modify it later...

Thanks,
	Martin

-----Original Message-----
From: Amos Jeffries [mailto:squid3@xxxxxxxxxxxxx]
Sent: Tuesday, 07 May 2013 05:15
To: squid-users@xxxxxxxxxxxxxxx
Subject: Re: multiple "logfile-daemon" logging to same logfile

On 6/05/2013 9:56 p.m., Martin Sperl wrote:
> Hi!
>
> I need to log slightly different access-log patterns to the same logfile (logging different headers depending on the host accessed).
>
> It looks essentially like this:
> access_log daemon:/logfile LOGPATTERN_1 ACL_SELECTOR_1
> access_log daemon:/logfile LOGPATTERN_2 ACL_SELECTOR_2
> access_log daemon:/logfile LOGPATTERN_OTHER all
>
> This config produces 3 different "logfile-daemon" processes that all log to the same logfile.
>
> Is there a way to get only one of those logfile-daemons per file while retaining the capability to log different headers?
> Is there a risk of line corruption (where 2 lines from different daemons get merged into a single line) when running these daemons in parallel?

The file daemon helper bundled with Squid cannot do this safely AFAIK; it was written to be the sole writer to each log file.

You will need to either patch it to append to the file with lock+write+unlock, or write a different helper which receives log lines containing both your headers on all requests and filters them into the file. The purpose of log daemons is to offload all this filtering and management from Squid, after all.

Alternatively, it would be much easier to use the UDP, syslog, or TCP outputs to send two streams of log-line packets to one logging backend. Or use a database backend instead of a file: the DB log daemon helper in 3.3+ can do what you want.

Amos
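
For reference, a minimal sketch of the interim single-file approach discussed above: log every header of interest unconditionally in one format, so Squid spawns only one logfile-daemon, and do the per-host filtering later in the DB loader. The format codes are standard squid.conf logformat codes, but the header names (X-Header-A, X-Header-B) are placeholders, not the actual patterns from this thread:

  logformat merged_hdrs %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru "%{Host}>h" "%{X-Header-A}>h" "%{X-Header-B}>h"
  access_log daemon:/logfile merged_hdrs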
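
If the "write a different helper" route is taken, the sketch below shows one way to append under an advisory lock so several daemon instances can safely share one file. This is a hypothetical illustration, not the bundled code; it assumes the one-command-byte-per-line stdin protocol that the bundled log_file_daemon speaks (L = log line, R = rotate, T = truncate, O = reopen, F = flush) - verify against the helper source for your Squid version before relying on it:

  #!/usr/bin/env python3
  # Hypothetical sketch of a shareable Squid log daemon: appends each
  # log line under an exclusive flock so several instances writing the
  # same file do not interleave partial lines. Assumes the bundled
  # log_file_daemon's one-command-byte protocol (verify per version).
  import fcntl
  import sys

  LOG_PATH = sys.argv[1] if len(sys.argv) > 1 else "/var/log/squid/shared.log"

  def append_line(data):
      # Append mode plus an exclusive advisory lock: each write lands
      # atomically at the end of the file, whole lines at a time.
      with open(LOG_PATH, "a") as f:
          fcntl.flock(f, fcntl.LOCK_EX)
          try:
              f.write(data)
              f.flush()
          finally:
              fcntl.flock(f, fcntl.LOCK_UN)

  for line in sys.stdin:
      cmd, payload = line[0], line[1:]
      if cmd == "L":           # log one line
          append_line(payload)
      elif cmd in "RTOF":      # rotate/truncate/reopen/flush: no-ops
          pass                 # here, as the file is reopened per write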
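
And a sketch of the UDP/syslog alternative Amos mentions: both formatted streams go to one collector, which serializes the writes itself. The collector address is a placeholder, and the udp: (and tcp:) log modules are only available in Squid 3.2 and later:

  # two formatted streams, one logging backend
  access_log udp://127.0.0.1:5140 LOGPATTERN_1 ACL_SELECTOR_1
  access_log udp://127.0.0.1:5140 LOGPATTERN_2 ACL_SELECTOR_2
  # or hand the remainder to the local syslog
  access_log syslog:daemon.info LOGPATTERN_OTHER all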