On 03/12/2015 10:31 am, dweimer wrote:
On 01/23/2013 10:39 pm, Amos Jeffries wrote:
On 24/01/2013 4:13 a.m., dweimer wrote:
On 2013-01-23 08:40, dweimer wrote:
On 2013-01-22 23:30, Amos Jeffries wrote:
On 23/01/2013 5:34 a.m., dweimer wrote:
I just upgraded my reverse proxy server last night from 3.1.20 to
3.2.6. Everything is working well except one of my log rules, and I
can't figure out why.
Please run "squid -k parse" and resolve any WARNINGs or ERRORs that
are listed.
There are two possible reasons...
I have several sites behind the server, with dstdomain access
rules set up.
acl website1 dstdomain www.website1.com
acl website2 dstdomain www.website2.com
acl website2 dstdomain www.website3.com
Possible reason #1 (assuming this is an accurate copy-n-paste from
your config file)..... you have no website3 ACL definition?
That was a typo in the email; the correct ACL is in the configuration.
squid -k parse outputs no warnings or errors.
...
Followed by the access rules
http_access allow website1
http_access allow website2
http_access allow website3
...
http_access deny all
Some are using rewrites
url_rewrite_program /usr/local/etc/squid/url_rewrite.py
url_rewrite_children 20
url_rewrite_access allow website1
url_rewrite_access allow website3
...
url_rewrite_access deny all
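(For anyone unfamiliar with rewrite helpers, here is a minimal sketch of
what such a script looks like under the legacy helper protocol; the
mapping and hostnames are illustrative only, not my actual
url_rewrite.py:)
#!/usr/bin/env python
# Minimal sketch of a legacy-protocol Squid URL rewriter; illustrative only.
import sys

# hypothetical public-to-backend mapping
REWRITES = {
    "http://www.website3.com/": "http://backend3.internal/",
}

def rewrite(url):
    for public, backend in REWRITES.items():
        if url.startswith(public):
            return backend + url[len(public):]
    return ""  # an empty reply tells Squid to leave the URL unchanged

for line in sys.stdin:
    parts = line.split()
    if not parts:
        continue
    # the first token of each request line is the URL
    sys.stdout.write(rewrite(parts[0]) + "\n")
    sys.stdout.flush()  # each reply must be flushed immediately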
Then my access logs
# First I grab everything in one
access_log daemon:/var/log/squid/access.log squid all
access_log daemon:/var/log/squid/website1.log combined website1
access_log daemon:/var/log/squid/website2.log combined website2
access_log daemon:/var/log/squid/website3.log combined website3
...
Everything works right down to one of the access logs: the data
shows up in the access.log file, and it shows up in the
individual logs for all the others, except that one. If we say
website3 from the above example stands in for the one in my actual
file, the ACL works on the url_rewrite_access allow line, but for
some reason it fails on the access_log line. squid -k parse doesn't
show any errors, and it shows a "Processing: access_log
daemon:/var/log/squid/website3.log combined website3" line in the
output.
The log in question was originally at the end of my access_log
list, so I changed the order around to see if for some reason only
the last one was failing. No change: it's still only that one not
working, and the new last one in the list still works as expected.
I know the ACL is working, since it matches correctly on the
url_rewrite_access rule and the http_access rule just above the log
rules. Does anyone have ideas on how I can figure out why the log
entry isn't working?
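One thing I can try myself is turning up ACL debugging; section 28 is
Squid's access-control code, so something like this should show each
check as it happens:
# everything at level 1, ACL (section 28) checks at level 3
debug_options ALL,1 28,3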
I changed the lines back to daemon, and changed the ACL on the logs to
the rewritten-side ACL that is used on the cache_peer_access lines
later in the configuration. It works now, and the logs even show up
with the pre-rewrite host information...
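In squid.conf terms the change looks something like this (the backend
hostname is an illustrative placeholder, not my real one):
# ACL matching the URL as it exists after rewriting,
# the same one used on the cache_peer_access lines
acl website3_origin dstdomain backend3.internal
access_log daemon:/var/log/squid/website3.log combined website3_origin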
That does make me wonder why some lines were getting logged but not
all. The sites I thought were working do have higher usage; maybe I
was still missing a lot of entries from them without knowing it. I
guess I will see if my webalizer reports show a huge gain in hit count
over the old records from the 3.1.20 installation, or if this behavior
is only evident in the 3.2 branch.
I think you will find that the lines being logged previously were from
requests which were either not rewritten at all, or were rewritten
from another request's URL which was being logged.
Each of the ACL-driven directives in squid.conf is effectively an
event trigger script, deciding whether or not to perform some action.
Testing it only makes sense at the point where that action choice is
required. Squid's processing pathway checks http_access first, ... then
some others, ... then URL rewriting, ... then the destination
selection (cache_peer and others), ... then, when the transaction is
fully completed, the access_log output decisions are made.
Amos
Last night I applied the FreeBSD 10.1-RELEASE-p6 update and upgraded
the ports, which included Squid 3.4.12. I enabled the LAX HTTP option
in the ports configuration, which adds the --enable-http-violations
compile option, with the intention of enabling the broken_posts option
in the configuration. Hopefully I will now be able to test any
necessary changes before applying them to the production system.
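The plan is something along these lines (the ACL name and hostnames are
placeholders for our OWA and Sharepoint servers):
# broken_posts requires --enable-http-violations; it makes Squid send the
# extra CRLF after PUT/POST bodies that some broken origin servers expect
acl broken_servers dstdomain owa.example.com sharepoint.example.com
broken_posts allow broken_servers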
While doing this update I did have a thought: the system is running in
a FreeBSD jail, not on the base system. Is there a chance this issue
is caused by running within a jail? I'm curious whether anyone has run
into specific issues running Squid in a FreeBSD jail before.
Well, I am at a loss; debugging hasn't led to anything more than
confirming that a timeout occurs. I was able to create a test PHP form
to upload files to an Apache server and upload up to a 264MB file. I
didn't try any larger files, though I suspect it would work up to the
1024MB limit I had Apache configured for. So it's not all HTTPS, only
those files going to our OWA and Sharepoint servers. The only setting
I can find that changes the behavior at all is "write_timeout":
setting it to something smaller, like 45 seconds, makes it error out
sooner instead of taking forever to give up.
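For reference, the test change was simply:
# default write_timeout is 15 minutes; shortening it only makes the
# failure surface faster, it does not fix anything
write_timeout 45 seconds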
I tried uninstalling the Squid 3.4 FreeBSD port and using the 3.3 port
instead on the test system; no change. I also tried installing 3.5
from source using the same configure options that my 3.4 port reported
via squid -v; again, no change.
I have verified that the IIS logs show a client request timeout
occurred, and broken_posts allow didn't change the behavior at all. I
do know that if I point the browser directly at the Exchange server it
works, so it's only broken going through the reverse proxy. If I
point the browser through a forwarding Squid proxy that talks directly
to the Exchange server instead of going through the reverse proxy, it
also works with no special settings. If I post a large debugging file
to a website, do I have any volunteers to look at it and see if they
can tell what's going on?
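For whoever volunteers, I would capture the trace at full verbosity,
something like this (ALL,9 is extremely verbose and the log grows very
fast, so only for a short test run):
# log every debug section at maximum detail
debug_options ALL,9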
--
Thanks,
Dean E. Weimer
http://www.dweimer.net/