Re: Redirects error for only some Citrix sites

On 18/07/2015 1:42 a.m., Laz C. Peterson wrote:
> Hello all,
> 
> Very weird issue here.  This happens to only select Citrix support articles.  (For example, http://support.citrix.com/article/CTX122972 when searching Google for “citrix netscaler expired password”, which is the top link in my results, or also searching for the same article directly on the Citrix support site.)
> 
> This is a new install of Squid 3 on Ubuntu 14.04.2 (from Ubuntu repository).  When clicking the Google link, I get “too many redirects” error, saying that possibly the page refers to another page that is then redirected back to the original page.
> 
> I tried debugging but did not find much useful information.  Has anyone else seen behavior like this?
> 

The problem is the client fetching URL X gets a 30x redirect message
instructing it to contact URL X instead (X being the same URL it *was*
fetching).
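
To confirm that is what is happening, fetch the URL through Squid and look
at the Location header coming back (the proxy address below is an
assumption, adjust it for your install):

  curl -x http://127.0.0.1:3128 -sI http://support.citrix.com/article/CTX122972

If the 30x Location points straight back at the URL you asked for, this is
the loop.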

Usually that is a misconfiguration on the origin server itself, fixable
only by the origin site authors. But there are also a few ways Squid can
play a part:

1) The 30x response pointing at itself really was (wrongly) generated by
the server, and it also explicitly stated that it should be cached [or you
configured Squid to force-cache it].

Squid obeyed, and now you keep getting these loops. That will continue
until the cached content expires or is purged.
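
If (1) is the cause you can also evict the stale object by hand. A minimal
sketch, assuming you add a PURGE ACL to squid.conf (not there in a default
install) and run the command on the Squid box itself:

  # squid.conf
  acl Purge method PURGE
  http_access allow Purge localhost
  http_access deny Purge

  # after reconfiguring Squid:
  squidclient -m PURGE http://support.citrix.com/article/CTX122972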


2) You are using the Store-ID/Store-URL feature of Squid and did not check
that the URLs being merged produce identical output. One of them produces a
302 redirect to X, which got cached. So now a fetch of any URL in the
merged set (including X itself) gets the cached 302 redirect back to X.
Again, that will continue until the cached content expires or is purged.
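
As an illustration only (the rule below is hypothetical, not from your
config), a Store-ID mapping this broad collapses every article into one
cache key, so whichever response arrives first - even a 302 - is replayed
for the whole set:

  # /etc/squid/storeid.conf, read by the bundled storeid_file_rewrite
  # helper; fields are tab-separated: regex <TAB> store-id
  ^http://support\.citrix\.com/article/.*    http://support.citrix.com/article/ALL

  # squid.conf (helper path varies by distro)
  store_id_program /usr/lib/squid/storeid_file_rewrite /etc/squid/storeid.conf
  store_id_children 5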


3) You are using a URL redirector that is generating the 302 response
loop. Usually this is a redirector with badly written (overly inclusive)
regex patterns, causing behaviour similar to (2).
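
With the classic helper interface a redirector answers Squid with a
"302:URL" line. The loop appears when the helper's own pattern also matches
the URL it redirects to. A hypothetical setup:

  # squid.conf
  url_rewrite_program /usr/local/bin/my-redirector   # hypothetical helper

  # helper pattern:  ^http://support\.citrix\.com/.*
  # helper reply:    302:http://support.citrix.com/article/CTX122972
  # The destination also matches the pattern, so the client's follow-up
  # request earns the same 302 again, forever.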


4) You are using a URL re-writer that is taking client request URL Y and
(wrongly) re-writing it to X. Squid fetches X from the backend server,
which replies with a redirect to Y (because Y != X) ... and the loop repeats.
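
A hypothetical trace of (4), using example.com rather than the real site:
the re-writer strips "www." while the origin server insists on putting it
back:

  # client requests Y:         http://www.example.com/page
  # re-writer hands Squid X:   http://example.com/page
  # origin replies:            302  Location: http://www.example.com/page
  # client follows it, the re-writer strips "www." again ... and loop.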


5) You could be directing traffic to a cache_peer on port 80 regardless
of whether the http:// or https:// scheme was received from the clients.
If the receiving peer/server emits a 302 to an https:// URL for all
traffic arriving on its port 80, this sort of loop happens. It's a slightly
more complicated form of (4), with cache_peer acting as the equivalent of a
URL re-writer.
The best fix for that is at the server. RFC 7230 section 5.5 has an
algorithm for what compliant servers should be doing. Squid's job is to
relay the request and URL unchanged.
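
For completeness, a sketch of the kind of configuration behind (5) - the
hostname is made up, not taken from your setup - where every request,
https:// or not, gets pushed at the peer's port 80:

  # squid.conf
  cache_peer backend.example.com parent 80 0 no-query originserver name=backend
  cache_peer_access backend allow all
  never_direct allow all

  # If backend.example.com answers everything on its port 80 with
  # "302 Location: https://...", those https:// requests come straight
  # back through the same peer on port 80 ... and loop.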

Amos

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users



