On 07/10/10 00:53, mohd hafiz wrote:
> On Wed, Oct 6, 2010 at 6:54 PM, Amos Jeffries <squid3@xxxxxxxxxxxxx> wrote:
>> On 06/10/10 22:55, mohd hafiz wrote:
>>> ---------- Forwarded message ----------
>>> From: mohd hafiz <bmhafiz@xxxxxxxxx>
>>> Date: Wed, Oct 6, 2010 at 5:17 PM
>>> Subject: Re: URL redirection in offline mode
>>> To: Amos Jeffries <squid3@xxxxxxxxxxxxx>
>>> Thanks for the fast response.
>>> My Squid will have to operate while the network goes up and down. It
>>> will just do normal operation when the network is up. When the network
>>> is down, Squid will intercept all requests from clients and point them
>>> to a local server. I wrote a perl script to do the redirection.
>> No need for that. The redirect is automatic with prefer_direct. The
>> local server just needs to accept the random domains passed to it by
>> Squid.
> You mean I don't need the perl script? I used the perl script as the
> url_rewrite_program, /etc/squid/redirect.pl.
Yes, I mean you don't need to do that. See below.
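
For reference, a url_rewrite_program helper of the kind described is
usually shaped like the following. This is a hypothetical, untested
sketch of such a redirect.pl (the status-file path and the offline URL
are placeholders): Squid writes one request per line to the helper's
stdin, and the helper replies on stdout with a replacement URL, or an
empty line to leave the request untouched.

  #!/usr/bin/perl
  # Sketch of a url_rewrite_program helper. Input lines look like
  # "URL client-ip/fqdn ident method"; the reply is the new URL, or
  # an empty line for "no change".
  use strict;
  use warnings;
  $| = 1;    # helpers must not buffer replies to Squid

  while (my $line = <STDIN>) {
      my ($url) = split ' ', $line;        # first field is the URL
      if (-e "/var/run/network_down") {    # placeholder "network down" flag
          print "http://example.com/offline.html\n";  # send to local server
      } else {
          print "\n";                      # leave the URL unchanged
      }
  }
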
>>> I have configured my cache_peer as:
>>> cache_peer example.com parent 3128 3130 default
>> example.com being your "local server". Is that another proxy or a web
>> server? The answer determines whether you use ports 3128/3130 or 80/0.
> example.com is my web server.
Then you need to use:

  cache_peer example.com parent 80 0 default

and enable:

  prefer_direct on
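
Put together, a minimal squid.conf sketch of that failover setup looks
like this. The no-query and originserver flags are additions beyond the
two lines above: no-query matches the 0 ICP port, and originserver tells
Squid the peer is a web server rather than another proxy (example.com
stands in for your local web server):

  # go direct to the Internet while the network is up ...
  prefer_direct on
  # ... and fall back to the local web server as the last-resort parent
  cache_peer example.com parent 80 0 default no-query originserver
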
>>> But the browser still tried to reach the Internet. It takes a few
>>> minutes to resolve to my local page. Any advice?
>> Are you doing WCCP, NAT interception or transparent proxy?
> I'm doing transparent proxy.
>> If yes, the browser will be attempting, and failing, its own DNS
>> lookup to go direct to the Internet. Squid cannot help here.
>> Connectivity failover with a proxy is not easily compatible with
>> interception.
> Is it that Squid cannot function when the network is down? I know that Squid
Due to the transparent proxying, the problem is not in Squid.

Right at the very start the web browser does its own DNS lookup,
thinking it has to contact the Internet itself. This first lookup fails
and the browser presents its "unable to resolve" page. The request never
gets near Squid to be passed to the peer.
If you have a local resolver which is still returning results to the
browser, the request will possibly get to Squid, where things continue
as desired. Any DNS delay will be doubled or tripled.
Configure the browser to pass requests directly to the proxy. The
browser will then start by passing the request to Squid. From there your
Squid failover config has control.
> will do a DNS lookup at startup, and will not start if that lookup
> fails. I have tried disabling the internal DNS lookup and the problem
> still exists. Is there any way to solve this?
Disabling the internal DNS resolver only switches Squid to using an
older, slower external resolver process.
Squid requires DNS whenever it has to resolve a squid.conf entry from a
name to an IP, such as names in src/dst ACLs. Using IP addresses there
helps avoid DNS lookups at startup.
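
For example, an ACL written with an address needs no lookup while the
config is parsed (the "localserver" name and 192.0.2.10 are placeholders
for your own setup):

  # needs a DNS lookup when squid.conf is parsed:
  #   acl localserver dst www.example.com
  # needs none:
  acl localserver dst 192.0.2.10/32
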
Older Squid needed the -D command line option to prevent tests of the
configured nameservers, and a visible_hostname setting to prevent the
rDNS lookup of its own hostname.
Newer Squid have dropped -D and the related tests, and their
visible_hostname will fall back to "localhost" instead of stopping.
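
On an older Squid that combination looks something like this
(proxy.example.com is a placeholder for whatever name you want
advertised):

  # squid.conf: a fixed name avoids the startup rDNS lookup
  visible_hostname proxy.example.com

then start Squid as "squid -D" to skip the nameserver tests.
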
> I also did a setup as below:
> I wrote a shell script to ping the network and write the status to a
> text file. The perl script reads the text file and does the redirection
> based on that status.
Sending a packet (ping echo, or TCP connection open) to somewhere
outside will produce a success/fail result. The NIC whose cable got
disconnected produces an ICMP message to notify that another route must
be used. Squid probes for and receives these on each and every request.
It's identical to what your perl script and text files do, but it
happens automatically within nanoseconds of any network change.
Amos
--
Please be using
Current Stable Squid 2.7.STABLE9 or 3.1.8
Beta testers wanted for 3.2.0.2