Re: Failed to select source for - Unable to forward this request at this time

On 15/12/2011 11:46 p.m., karj wrote:
Hi list,
I am using Squid as an HTTP accelerator (Squid Cache: Version 2.7.STABLE9)
and I've got the following error:

ERROR
The requested URL could not be retrieved

While trying to retrieve the URL:
http://swww.mgmt.example.com/themes/1/default/Media/Home/sylloges.arrow.right.active.gif

The following error was encountered:

     * Unable to forward this request at this time.

This request could not be forwarded to the origin server or to any parent
caches. The most likely cause for this error is that:

     * The cache administrator does not allow this cache to make direct
connections to origin servers, and
     * All configured parent caches are currently unreachable.

This error means exactly what its text says.

In cache.log I get the following message:

Failed to select source for
'http://swww.mgmt.example.com/themes/1/default/Media/Home/sylloges.arrow.right.active.gif'
2011/12/15 12:14:48|   always_direct = 0
2011/12/15 12:14:48|    never_direct = 0
2011/12/15 12:14:48|        timedout = 0

In access.log I get:
504 1648 TCP_MISS:NONE - - [15/Dec/2011:12:37:31 +0200] "GET http://swww.mgmt.example.com/themes/1/default/Media/Home/sylloges.arrow.right.active.gif HTTP/1.1" "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2.24) Gecko/20111103 Firefox/3.6.24 GTB7.1"


But I am sure the URL is reachable on the backend web server.

The error message happening means otherwise.


The configuration options for the above URL are:


refresh_pattern example\.com\/(.*)\.(jpg|gif|png|jpeg|css|js|axd|bmp|ico|swf) 1440 1000% 4320 reload-into-ims override-expire ignore-no-cache ignore-private override-lastmod ignore-reload
refresh_pattern example\.com\/image\.limg 1440 1000% 4320 reload-into-ims override-expire ignore-no-cache ignore-private override-lastmod ignore-reload
# First page 5 min
refresh_pattern -i .example\.com\/$ 5 100% 6 reload-into-ims override-expire ignore-no-cache ignore-private override-lastmod ignore-reload
# Articles page 6 min
refresh_pattern -i .example\.com\/(.*)\/\?aid= 6 100% 6 reload-into-ims override-expire ignore-no-cache ignore-private override-lastmod ignore-reload
# ArticleLIST Pagination page 60 min (articlelist/?pg=)
refresh_pattern -i .example\.com\/(.*)\/articlelist\/\?pg= 60 100% 70 reload-into-ims override-expire ignore-no-cache ignore-private override-lastmod ignore-reload
refresh_pattern -i example\.com 1440 0% 4320 reload-into-ims override-expire ignore-no-cache ignore-private override-lastmod ignore-reload

I see a lot of nasty hacks going on in there. You are going to enjoy the Surrogate-Control feature of Squid-3 when you upgrade.
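As a hedged illustration (the header values here are my own example, not taken from this thread): with Surrogate-Control the backend tells the accelerator how to cache, while end clients still see the normal Cache-Control, so most of the refresh_pattern overrides above become unnecessary.

```
HTTP/1.1 200 OK
Surrogate-Control: max-age=86400
Cache-Control: private, no-cache
```

Here the surrogate (the reverse proxy) may cache the object for 24 hours, while downstream clients are told not to cache it at all.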

Although why you would want to ignore and cache "private" objects in a reverse proxy (which is shared by the entire Internet by definition) is beyond me.


#--- CACHE_PEER farm example start ----
cache_peer xxx.xxx.xxx.xxx parent 80 0 no-query no-digest no-netdb-exchange originserver name=new_servers login=PASS
acl www.mgmt.example.com_site dstdomain .example.com
http_access allow www.mgmt.example.com_site
cache_peer_access new_servers allow www.mgmt.example.com_site
cache_peer_access new_servers deny all
#--- CACHE_PEER farm example end ----

icp_port 3130
icp_hit_stale off
log_icp_queries off
icp_query_timeout 500 # milliseconds
visible_hostname KASANDRA.example.com
cache_peer xxx.xxx.xxx.xxx sibling 80 3130 no-netdb-exchange no-delay no-digest proxy-only
cache_peer xxx.xxx.xxx.xxx sibling 80 3130 no-netdb-exchange no-delay no-digest proxy-only

I have the same configuration options on both siblings; all my servers have
the same problem.


I've googled my problem and found that this happens when you run out of file
descriptors, which is not my case:

        squidclient -p80 -h localhost mgr:info | grep 'file descri'

        Maximum number of file descriptors:   65534
        Available number of file descriptors: 62511
        Reserved number of file descriptors:    100

File descriptors are only one of many paths which lead to a peer not responding. They are also only relevant on the server to which connections are failing (i.e. if the backend were a Squid which had run out of descriptors). You are "dangerously close" to running out, however.

"The Problem" is that Squid detected the peer as down but, I think, has no way to detect that it has become available again.

Squid requires 10 failed requests to detect a peer as down, and just one success to detect it up again. These may be HTTP requests (normally none happen while the peer is in the 'DEAD' state), or ICP queries (disabled by no-query), or NetDB checks (disabled by no-netdb-exchange), or cache digest exchanges (disabled by no-digest), or ICMP pings (probably disabled as well, yes?).
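If you want to keep those probe mechanisms disabled, one possible alternative is to give the peer a cheap periodic health fetch so recovery can be detected. This is only a sketch: the monitorurl/monitorinterval cache_peer options appeared around Squid-2.6, and the exact names and values should be verified against your squid.conf documentation.

```
# Sketch: periodic health fetch so a DEAD peer can be re-detected as alive
cache_peer xxx.xxx.xxx.xxx parent 80 0 no-query no-digest no-netdb-exchange \
    originserver name=new_servers login=PASS \
    monitorurl=http://swww.mgmt.example.com/ monitorinterval=30
```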

To resolve that, in the case where the peer actually can't perform any of the services which are disabled, you can use the "default" option on the originserver cache_peer line. That will make Squid attempt the HTTP request before declaring the error, even if the peer is thought to be dead.
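Applied to the cache_peer line from your configuration (IP placeholder kept as posted), that suggestion would look something like this:

```
cache_peer xxx.xxx.xxx.xxx parent 80 0 no-query no-digest no-netdb-exchange \
    originserver default name=new_servers login=PASS
```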


Amos

