Re: Random image generator w/ reverse-proxy

Keith M. Richard wrote:
Amos,

	I have a slightly older version of squid and it is set up as an
accelerator. Let me give you the layout.
Domain name: www.my-company.org
Domain IP: 204.public address
DMZ IP Addr: 172.220.201.135 (squid server)
Internal IP: 192.1.0.59 (Web Server)

Then the line you configured:
  http_port 192.1.0.59:8888 defaultsite=www.my-company.org

is wrong. It should be:

  http_port 8888 defaultsite=www.my-company.org

Both peers need easy names to reference, so change these to:

  cache_peer 192.1.0.59 parent 443 0 no-query originserver ssl
      login=PASS name=httpsWeb

  cache_peer 192.1.0.59 parent 8888 0 no-query originserver
      name=imgServlet


Also add this:

  acl myWebsite dstdomain www.my-company.org
  acl imgServletPort myport 8888

  cache_peer_access httpsWeb deny imgServletPort
  cache_peer_access httpsWeb allow myWebsite

  cache_peer_access imgServlet allow imgServletPort myWebsite


If you want it to be a pure accelerator, also add these:
  never_direct allow all
  http_access deny !myWebsite
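Pulling the pieces above together, the accelerator portion of squid.conf for this layout might look like the following sketch (2.6-era syntax; the IPs, domain, and the myWebsite/imgServletPort names come from this thread -- treat it as illustrative, not a drop-in file):

```
# Listen on bare ports only -- never bind http_port to the backend's IP
http_port 80 defaultsite=www.my-company.org
http_port 8888 defaultsite=www.my-company.org
https_port 443 cert=/etc/squid/cert/portalcert.pem
    key=/etc/squid/cert/key.pem defaultsite=www.my-company.org

# Both backends are the internal web server, reached by its real IP
cache_peer 192.1.0.59 parent 443 0 no-query originserver ssl
    login=PASS name=httpsWeb
cache_peer 192.1.0.59 parent 8888 0 no-query originserver name=imgServlet

acl myWebsite dstdomain www.my-company.org
acl imgServletPort myport 8888

# Route port-8888 traffic to the servlet peer, everything else to HTTPS
cache_peer_access httpsWeb deny imgServletPort
cache_peer_access httpsWeb allow myWebsite
cache_peer_access imgServlet allow imgServletPort myWebsite

# Pure accelerator: never go direct, and serve only our own domain
never_direct allow all
http_access deny !myWebsite
```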

Squid will now perform acceleration without using DNS.

SQUID: Loads with the -D for no DNS and the host file has an entry for
192.1.0.59 as www.my-company.org.

-D just means the DNS servers are not tested before use. Squid still needs to, and does, resolve things it needs while running.

The hosts file entry should lock that domain name so it is never looked up remotely, though.
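For reference, the pin Keith describes is a single /etc/hosts line on the squid box (addresses from this thread):

```
192.1.0.59    www.my-company.org
```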


I see below you are using 2.6.STABLE6. That is new enough that the internal-looping comments still stand, and so does the solution.

For the acceleration part, squid should only be listening on a bare port or a 204.public-address:port combination.


Here is a dump from my cache.log from the last restart of squid:
2008/02/18 16:32:29| Starting Squid Cache version 2.6.STABLE6 for
i686-redhat-linux-gnu...
2008/02/18 16:32:29| Process ID 23575
2008/02/18 16:32:29| With 1024 file descriptors available
2008/02/18 16:32:29| Using epoll for the IO loop
2008/02/18 16:32:29| DNS Socket created at 0.0.0.0, port 32938, FD 5
2008/02/18 16:32:29| Adding domain groupbenefits.org from
/etc/resolv.conf
2008/02/18 16:32:29| Adding nameserver 204.xxx.xxx.xxx from
/etc/resolv.conf
2008/02/18 16:32:29| Adding nameserver 204.xxx.xxx.xxx from
/etc/resolv.conf
2008/02/18 16:32:29| User-Agent logging is disabled.
2008/02/18 16:32:29| Referer logging is disabled.
2008/02/18 16:32:29| Unlinkd pipe opened on FD 10
2008/02/18 16:32:29| Swap maxSize 10240000 KB, estimated 787692 objects
2008/02/18 16:32:29| Target number of buckets: 39384
2008/02/18 16:32:29| Using 65536 Store buckets
2008/02/18 16:32:29| Max Mem  size: 8192 KB
2008/02/18 16:32:29| Max Swap size: 10240000 KB
2008/02/18 16:32:29| Local cache digest enabled; rebuild/rewrite every
3600/3600 sec
2008/02/18 16:32:29| Rebuilding storage in /var/cache/squid (CLEAN)
2008/02/18 16:32:29| Using Least Load store dir selection
2008/02/18 16:32:29| Current Directory is /
2008/02/18 16:32:29| Loaded Icons.
2008/02/18 16:32:29| Accepting accelerated HTTP connections at 0.0.0.0,
port 80, FD 12.
2008/02/18 16:32:29| Accepting accelerated HTTP connections at 0.0.0.0,
port 8888, FD 13.
2008/02/18 16:32:29| Accepting HTTPS connections at 0.0.0.0, port 443,
FD 14.
2008/02/18 16:32:29| Accepting ICP messages at 0.0.0.0, port 3130, FD
15.
2008/02/18 16:32:29| WCCP Disabled.
2008/02/18 16:32:29| Configuring Parent 192.1.0.59/443/0
2008/02/18 16:32:29| Configuring Parent 192.1.0.59/8888/0
2008/02/18 16:32:29| Ready to serve requests.

All I really want to do is set up an HTTP accelerator for this internal
website. I have read everything I can find about this and I guess I do
not understand the options. I do know that the options in squid.conf
change rapidly and I am not running the newest version; I am running the
version that is loaded on my Red Hat server. I have downloaded the
newest version and am planning an upgrade very soon, but I need to
get this going first.

Thanks,
Keith
-----Original Message-----
From: Amos Jeffries [mailto:squid3@xxxxxxxxxxxxx]
Sent: Monday, February 18, 2008 5:13 PM
To: Keith M. Richard
Cc: squid-users@xxxxxxxxxxxxxxx
Subject: Re:  Random image generator w/ reverse-proxy

All,

            I have a web page on my site that has a randomly generated
image (alphanumeric picture) to allow users to register. I am using
squid as an accelerator in my DMZ to this internal web server. Right
now the image is coded as an unsecured (http) link/servlet on port
8888, which is just a random port. This is embedded in an HTTPS page.
If I don't use squid it works, but through squid it fails to display
the image.
            I have checked the firewall and it is properly configured.
When I check the firewall's log, it shows the request to 8888 from the
outside, but those same requests are never passed through squid for
some reason. I have also run Wireshark on the squid server to capture
the traffic as users made requests, and I see the TCP [SYN] from the
client to the squid server's IP address, but then squid sends a TCP
[RST, ACK]. When I watch the same request being made from the squid
server running Firefox to the internal web server, it makes the
handshake. I cannot figure out why the reset is happening.
You have a forwarding loop in the config below.

        I modified the logformat so that I can get some readable data,
and this is what I get from the output:

18/Feb/2008:13:03:12 -0600 xxx.xxx.xxx.xxx:51651 192.168.0.135:8888
TCP_MISS/404 697 GET
http://www.my-company.org/randomimages/servlet/org.groupbenefits.por
tal.RandomImageGenServlet? FIRST_UP_PARENT/192.1.0.59 text/html

******************************************************************
# Basic config
acl all src 0.0.0.0/0.0.0.0
acl manager proto http cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 8080 # safe
acl Safe_ports port 8888 # safe
Check #1. access to port 8888 is possible. Great.

acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

# Accelerator Mode
http_port 80 defaultsite=www.my-company.org
check #2. squid is configured as accel or vhost? NOPE.

http_port 192.1.0.59:8888 defaultsite=www.my-company.org
note #1. squid is itself 192.1.0.59:8888. we come back to this later.

https_port 443 cert=/etc/squid/cert/portalcert.pem
key=/etc/squid/cert/key.pem defaultsite=www.my-company.org
note #2. squid is itself 0.0.0.0:443. we come back to this later.

cache_peer 192.1.0.59 parent 443 0 no-query originserver ssl
    login=PASS name=www.my-company.org
So squid is its own parent (see note #2)? All requests destined there
will detect a loop and die after timeouts.

cache_peer 192.1.0.59 parent 8888 0 no-query originserver
So squid is its own parent (see note #1)? All requests destined there
will detect a loop and die after timeouts.

visible_hostname www.my-company.org
acl ourSite dstdomain www.my-company.org
http_access allow ourSite
Okay, so it _IS_ supposed to be an accelerator. Right now it's just an
open proxy for that domain. This is why check #2 failed.

# Log file and cache options
logformat squid %tl %>a:%>p %la:%lp %Ss/%03Hs %<st %rm %ru %Sh/%<A %mt
cache_dir ufs /var/cache/squid 100 16 256
cache_swap_low 90
cache_swap_high 95
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
No need for that; just set it to 'none' and save yourself some disk I/O.

pid_filename /var/spool/squid/squid.pid

#Cache Manager settings
http_access allow manager localhost
http_access deny manager
http_access deny all
Okay. So there are two parent data sources: squid or squid. Neither of
which will ever finish looping the request back to squid.
And if you are lucky you have another IP registered for
www.my-company.org which squid can use to pull the www data from
directly. Otherwise you are left with an implicit peer via DIRECT
(www.my-company.org A 192.1.0.59), which surprise-surprise has the
same effect as both configured peers.

Can you see the problem?
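To illustrate the loop detection Amos is describing: an HTTP proxy appends its own name to the Via header on every hop, and refuses any request whose Via already names it. A rough Python sketch of the idea (this is not Squid's actual code; the hostnames and header values are hypothetical):

```python
def is_forwarding_loop(via_header: str, own_hostname: str) -> bool:
    """Return True if this proxy already appears in the Via header.

    Each Via entry looks like "1.0 hostname (product/version)",
    comma-separated; a proxy that sees its own hostname there knows
    the request has come back around to it.
    """
    for entry in via_header.split(","):
        parts = entry.strip().split()
        if len(parts) >= 2 and parts[1] == own_hostname:
            return True
    return False


# A request that already bounced through www.my-company.org loops;
# one that only passed an upstream proxy does not.
looped = is_forwarding_loop(
    "1.0 www.my-company.org (squid/2.6.STABLE6)", "www.my-company.org")
clean = is_forwarding_loop(
    "1.0 upstream.example (squid/2.6.STABLE6)", "www.my-company.org")
```

This is why the misconfigured peers above die after timeouts: every forwarded request comes straight back in a listening port and trips the check.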

cache_peer MUST NOT loop back to the squid listening ports. In the
absence of configured routing inside squid, it accepts any input from
its http(s)_ports and requests from any available cache_peer or DIRECT
from the DNS-resolved web server.

What you need to do is set the cache_peer IP to the real (secret) IP
of the origin web servers for the site and extras. Publish the public
IP of squid in DNS as www.my-company.org. And set either a
cache_peer_access or cache_peer_domain to route the requests to the
proper peer.
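As a sketch of the two routing options (peer names and ACLs are the thread's examples; illustrative 2.6-era syntax only):

```
# Option 1: cache_peer_domain routes purely by destination domain.
# It cannot tell these two peers apart, since both serve the same
# domain -- it only helps when each peer owns a distinct domain.
cache_peer_domain httpsWeb www.my-company.org

# Option 2: cache_peer_access routes by arbitrary ACLs, so it can
# split on the port the client connected to (myport 8888).
acl imgServletPort myport 8888
cache_peer_access imgServlet allow imgServletPort
cache_peer_access httpsWeb deny imgServletPort
```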

Given that you are using a port/url-based determiner for the image
servlet, I would suggest cache_peer_access with various ACLs to direct
requests to the right place.

Amos




--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.
