Re: Reverse Proxy Funny Logging Issue

On 13/03/2015 4:31 a.m., dweimer wrote:
> On 01/23/2013 10:39 pm, Amos Jeffries wrote:
>> On 24/01/2013 4:13 a.m., dweimer wrote:
>>> On 2013-01-23 08:40, dweimer wrote:
>>>> On 2013-01-22 23:30, Amos Jeffries wrote:
>>>>> On 23/01/2013 5:34 a.m., dweimer wrote:
>>>>>> I just upgraded my reverse proxy server last night from 3.1.20 to
>>>>>> 3.2.6.  All is working well except one of my log rules, and I can't
>>>>>> figure out why.
>>>>>
>>>>> Please run "squid -k parse" and resolve any WARNINGs or ERRORs that
>>>>> are listed.
>>>>>
>>>>> There are two possible reasons...
>>>>>
>>>>>>
>>>>>> I have several sites behind the server, with dstdomain access
>>>>>> rules set up.
>>>>>>
>>>>>> acl website1 dstdomain www.website1.com
>>>>>> acl website2 dstdomain www.website2.com
>>>>>> acl website2 dstdomain www.website3.com
>>>>>
>>>>> Possible reason #1 (assuming this is an accurate copy-n-paste from
>>>>> your config file)...  you have no website3 ACL definition?
>>>>
>>>> That was a typo in the email; the correct ACL is in the configuration,
>>>> and squid -k parse outputs no warnings or errors.
>>>>
>>>>>
>>>>>> ...
>>>>>>
>>>>>> Followed by the access rules
>>>>>>
>>>>>> http_access allow website1
>>>>>> http_access allow website2
>>>>>> http_access allow website3
>>>>>> ...
>>>>>> http_access deny all
>>>>>>
>>>>>> Some are using rewrites
>>>>>> url_rewrite_program /usr/local/etc/squid/url_rewrite.py
>>>>>> url_rewrite_children 20
>>>>>> url_rewrite_access allow website1
>>>>>> url_rewrite_access allow website3
>>>>>> ...
>>>>>> url_rewrite_access deny all
>>>>>>
>>>>>> Then my access logs
>>>>>>
>>>>>> # First I grab everything in one
>>>>>> access_log daemon:/var/log/squid/access.log squid all
>>>>>>
>>>>>> access_log daemon:/var/log/squid/website1.log combined website1
>>>>>> access_log daemon:/var/log/squid/website2.log combined website2
>>>>>> access_log daemon:/var/log/squid/website3.log combined website3
>>>>>> ...
>>>>>>
>>>>>> Everything works except writing to one of the access logs: the data
>>>>>> shows up in the access.log file, and in the individual logs for all
>>>>>> the others, except that one.  If we use website3 from the above
>>>>>> example to stand in for my actual site, the access rule works on the
>>>>>> url_rewrite_access allow line, but for some reason it is failing on
>>>>>> the log line.  squid -k parse doesn't show any errors, and it shows a
>>>>>> "Processing: access_log daemon:/var/log/squid/website3.log combined
>>>>>> website3" line in the output.
>>>>>>
>>>>>> The log in question was originally at the end of my access_log
>>>>>> list section, so I changed the order around to see if for some
>>>>>> reason it was only the last one not working.  No change: still only
>>>>>> that one not working, and the new last one in the list still works
>>>>>> as expected.
>>>>>>
>>>>>> I know the ACL is working, as it works correctly on the rewrite
>>>>>> rule and the http_access line just above the log rules.  Anyone have
>>>>>> any ideas on how I can figure out why the log entry isn't working?
>>>>
>>>
>>> Changed the lines back to daemon, and changed the ACLs on the logs to
>>> the rewrite-side ACLs used on the cache_peer_access lines later in the
>>> configuration.  It works now, and the logs even show up with the
>>> pre-rewrite host information...
>>>
>>> That does make me wonder why some lines were getting logged but not
>>> all.  The sites I thought were working do have higher usage, so maybe I
>>> was still missing a lot from them without knowing it.  I guess I will
>>> see whether my webalizer reports show a huge gain in hit count over the
>>> old records from the 3.1.20 installation, or if this behavior is only
>>> evident in the 3.2 branch.
>>>
>>
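
For illustration, the change described in the quoted message above might
look roughly like the sketch below.  The peer name, the rewritten hostname
and the "*_rewritten" ACL name are hypothetical, since the real values are
not shown in this thread; the point is simply that the access_log line now
reuses the ACL that matches the rewritten (cache_peer-side) URL instead of
the published site name.

    # published site name, matched before rewriting
    acl website3 dstdomain www.website3.com
    # hypothetical ACL for the rewritten URL, as used for peer selection
    acl website3_rewritten dstdomain backend3.example.internal

    url_rewrite_access allow website3
    cache_peer_access backend3 allow website3_rewritten

    # previously: access_log daemon:/var/log/squid/website3.log combined website3
    access_log daemon:/var/log/squid/website3.log combined website3_rewritten
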
>> I think you will find that the lines being logged previously were on
>> the requests which were either not rewritten at all, or were re-written
>> from another request's URL which was being logged.
>>
>> Each of the ACL-driven directives in squid.conf is effectively an event
>> trigger script, deciding whether or not to perform some action.  Testing
>> only makes sense at the point where that action choice is required.
>> Squid's processing pathway checks http_access first, ... then some
>> others, ... then url_rewriting, ... then the destination selection
>> (cache_peer and others), ... and then, once the transaction is fully
>> completed, the access_log output decisions are made.
>>
>> Amos
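
To make that ordering concrete, here is a commented sketch of the same
directives laid out along that pathway, using the same hypothetical names
as the sketch above; which URL each test sees is inferred from the
behaviour reported in this thread rather than stated authoritatively.

    # 1. http_access is checked first, against the URL as received from
    #    the client
    http_access allow website3
    # 2. url_rewrite_access is checked next, still against the original
    #    URL, to decide whether the request goes to the rewrite helper
    url_rewrite_access allow website3
    # 3. destination selection happens after rewriting, so this test sees
    #    the rewritten URL
    cache_peer_access backend3 allow website3_rewritten
    # 4. the access_log decision is made last, once the transaction has
    #    completed; in this setup it also matched only the rewritten URL
    access_log daemon:/var/log/squid/website3.log combined website3_rewritten
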
> 
> Last night I applied the FreeBSD 10.1-RELEASE-p6 update and upgraded the
> ports, which included Squid 3.4.12.  I enabled the LAX HTTP option in the
> ports configuration, which adds the --enable-http-violations compile
> option, with the intention of enabling the broken_posts option in the
> configuration.  I will hopefully be able to apply any necessary changes
> to the production system after I test them now.
> When doing this update I did have a thought: the system is running in a
> FreeBSD jail and not on the base system.  Is there a chance this issue is
> caused by running within a jail?  Curious if anyone has run into specific
> issues running Squid in a FreeBSD jail before?
> 

That should only matter if you are getting permissions issues, or if Squid
is not able to find system components that it depends on.  I would expect
that type of issue to be visible in cache.log or syslog rather than in the
HTTP access.log.

Amos
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users




