Re: Random image generator w/ reverse-proxy

> All,
>
>             I have a web page on my site that has a randomly generated
> image (Alpha numeric picture) to allow users to register. I am using squid
> as an accelerator in my DMZ to this internal web server. Right now the
> image is coded as an unsecured (http) link/servlet on port 8888, which is
> just a random port. This is embedded in an HTTPS page. If I don't use squid
> it works, but through squid it fails to display the image.
>             I have checked the firewall and it is properly configured.
> When I check the firewall's log, it shows the request to 8888 from the
> outside, but those same requests are never passed through squid for some
> reason. I have also run Wireshark on the squid server to capture the
> traffic as users made requests and I see the TCP [SYN] from the client to
> the squid server's IP address, but then squid sends a TCP [RST, ACK].
> When I watch the same request being made from the squid server running
> Firefox to the internal web server, it makes the handshake. I cannot figure
> out why the reset is happening.

You have a forwarding loop in the config below.

>         I modified the logformat so that I can get some readable data and
> this is what I get from the output:
>
> 18/Feb/2008:13:03:12 -0600 xxx.xxx.xxx.xxx:51651 192.168.0.135:8888
> TCP_MISS/404 697 GET
> http://www.my-company.org/randomimages/servlet/org.groupbenefits.por
> tal.RandomImageGenServlet? FIRST_UP_PARENT/192.1.0.59 text/html
>
> ******************************************************************
> # Basic config
> acl all src 0.0.0.0/0.0.0.0
> acl manager proto http cache_object
> acl localhost src 127.0.0.1/255.255.255.255
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443
> acl Safe_ports port 80 # http
> acl Safe_ports port 21 # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70 # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 8080 # safe
> acl Safe_ports port 8888 # safe

Check #1: access to port 8888 is possible. Great.

> acl Safe_ports port 1025-65535 # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
>
> # Accelerator Mode
> http_port 80 defaultsite=www.my-company.org

Check #2: is squid configured as accel or vhost? NOPE.

> http_port 192.1.0.59:8888 defaultsite=www.my-company.org

Note #1: squid itself is listening on 192.1.0.59:8888. We come back to this later.

> https_port 443 cert=/etc/squid/cert/portalcert.pem
> key=/etc/squid/cert/key.pem defaultsite=www.my-company.org

Note #2: squid itself is listening on 0.0.0.0:443. We come back to this later.

> cache_peer 192.1.0.59 parent 443 0 no-query originserver ssl login=PASS
> name=www.my-company.org

So squid is its own parent (see note #2)? All requests routed there will
be caught by loop detection and die after timeouts.

> cache_peer 192.1.0.59 parent 8888 0 no-query originserver

So squid is its own parent (see note #1)? All requests routed there will
be caught by loop detection and die after timeouts.

> visible_hostname www.my-company.org
> acl ourSite dstdomain www.my-company.org
> http_access allow ourSite

Okay, so it _IS_ supposed to be an accelerator. Right now it's just an open
proxy for that domain. This is why check #2 failed.
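A sketch of the accelerator form of those listening ports, assuming Squid
2.6 or later where the 'accel' flag marks a port as reverse-proxy only
(cert paths copied from the config above):

```
# The 'accel' flag (Squid 2.6+) makes these reverse-proxy ports
# rather than open-proxy ports.
http_port 80 accel defaultsite=www.my-company.org
http_port 192.1.0.59:8888 accel defaultsite=www.my-company.org
https_port 443 accel cert=/etc/squid/cert/portalcert.pem \
        key=/etc/squid/cert/key.pem defaultsite=www.my-company.org
```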

>
> # Log file and cache options
> logformat squid %tl %>a:%>p %la:%lp %Ss/%03Hs %<st %rm %ru %Sh/%<A %mt
> cache_dir ufs /var/cache/squid 100 16 256
> cache_swap_low 90
> cache_swap_high 95
> access_log /var/log/squid/access.log squid
> cache_log /var/log/squid/cache.log
> cache_store_log /var/log/squid/store.log

No need for that; just set it to 'none' and save yourself some disk I/O.
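In squid.conf that is a single directive:

```
# Disable the store log entirely; it is only useful for debugging.
cache_store_log none
```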

> pid_filename /var/spool/squid/squid.pid
>
> #Cache Manager settings
> http_access allow manager localhost
> http_access deny manager
> http_access deny all

Okay. So there are two parent data sources: squid or squid. Neither of
them will ever finish; each just loops the request back to squid.
If you are lucky you have another IP registered for www.my-company.org
which squid can use to pull the www data from directly. Otherwise you are
left with an implicit peer via DIRECT (www.my-company.org A 192.1.0.59),
which, surprise surprise, has the same effect as both configured peers.

Can you see the problem?

cache_peer MUST NOT loop back to the squid listening ports. In the absence
of configured routing inside squid, it accepts input on any of its
http(s)_port lines and fetches from any available cache_peer, or goes
DIRECT to the DNS-resolved web server.

What you need to do is set the cache_peer IP to the real (secret) IP
of the origin web servers for the site. Publish the public IP
of squid in DNS as www.my-company.org. And set either a cache_peer_access
or cache_peer_domain rule to route the requests to the proper peer.

Given that you are using a port/URL-based determiner for the image
servlet, I would suggest cache_peer_access with various ACLs to direct
requests to the right place.
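A sketch of what that might look like. Everything here is an assumption to
be adjusted: the origin IP 192.168.0.135 is taken from the log line above
(substitute the real internal address of your web server), and the peer
names and servlet URL pattern are invented for illustration:

```
# Peers point at the REAL origin server, never back at squid's own IPs.
cache_peer 192.168.0.135 parent 443 0 no-query originserver ssl \
        login=PASS name=securePeer
cache_peer 192.168.0.135 parent 8888 0 no-query originserver name=imagePeer

# Send image-servlet requests to the port-8888 peer, everything else
# to the HTTPS peer.
acl imageServlet urlpath_regex ^/randomimages/
cache_peer_access imagePeer allow imageServlet
cache_peer_access imagePeer deny all
cache_peer_access securePeer deny imageServlet
cache_peer_access securePeer allow all

# Never fall back to DIRECT, which would resolve www.my-company.org
# to squid itself again.
never_direct allow all
```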

Amos


