Re: Re: Squid 3.2.2 + localhost

On 23.10.2012 23:08, alberto.desi wrote:
Yes, I've read the mail but I think it is better to post it here... ;-)
The main problem is that if I rewrite the $url (to localhost [::1], or to
[5001::52] / [3001::52], which are the addresses of the interfaces) it doesn't
work. If I rewrite $url with a "302:" code in front, it works...

Because that is NOT a re-write. That is a redirect - i.e. how HTTP is designed to operate, and it works far better than re-writing hacks do.

You can redirect using the 301, 302, 303, or 307 status codes depending on the behaviour you want from the client when it handles the redirect. They form a matrix: temporary versus permanent change of URL (eg whether to update bookmarks and cached reference content), and whether to change the method to GET or retain the existing one when passing the request to the new URL.
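As a sketch of what answering with a redirect looks like from a helper's side (Python here purely for illustration; the backend address and the "/Big/" match are hypothetical, and this assumes the plain non-concurrent url_rewrite helper protocol where the first token of each request line is the URL):

```python
import sys

TARGET = "http://[4001::52]"  # illustrative backend address, not from a real setup

def answer(line):
    """Map one url_rewrite helper request line to one reply line."""
    url = line.split()[0]                  # first token is the request URL
    if "/Big/" in url:
        path = url.split("/", 3)[3]        # drop the scheme://host/ prefix
        # "302:" asks Squid to send the client an HTTP redirect
        return "302:%s/%s" % (TARGET, path)
    return ""                              # empty reply = leave the URL unchanged

def helper_loop():
    # Squid keeps the helper running and sends one request per line;
    # each reply must be flushed immediately or Squid stalls.
    for line in sys.stdin:
        print(answer(line.strip()))
        sys.stdout.flush()
```

The client then re-issues the request to the new URL itself, which is why this behaves differently from a silent rewrite.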

You should only ever need to re-write the URL if you are altering part of the path segment or query string parameters.

To simply *route* the requests via a different upstream server use a cache_peer directive to setup a fixed TCP connection to the server.

but the behavior of the system is completely different.
Replying to your requests: I'm working on 4 virtual machines called origin,
core, mobile1 and mobile2. On origin and the mobiles I have Apache servers
running... those are my caches!!! When I write localhost, I want to redirect
to the apache link

Aha. Thank you. That was one of the confusing things - your names for the machines do not align with the common networking terminology for what they do.

Because Squid is a *type* of software called a 'cache/caching proxy' and Apache is a *type* called an 'origin server'. 'localhost' is ::1 or 127.0.0.1.



example:
I receive a GET for http://[6001::101]/Big/big1.avi and I want to rewrite it
as http://[5001::52]/Big/big1.avi, which is the link to the Apache on the
same machine where Squid is installed. This is not working. But if I
redirect to another machine with Apache, http://[4001::52]/Big/big1.avi, it
works.

OK?

Okay. Start with forgetting re-write and redirect. What you are doing is HTTP routing.

Which means you configure a cache_peer for each of your Apache servers, do something to identify where the request is supposed to go, and have Squid relay the request there. No need to change it in any way.


To identify where to send it you have your script.

Use the external_acl_type helper interface to call your script. This does three important things:
 1) offers you far more parameters than the old url_rewrite interface.
 2) can be called at any time in the ACL processing chain.
 3) provides tagging and a few other feedback details from the helper to Squid.


I will leave you the study of figuring out what external_acl_type % format codes are needed by your helper. Here is the documentation: http://www.squid-cache.org/Doc/config/external_acl_type/

I suggest that instead of sending back an altered URL you send back "ERR" for no change, and "OK tag=X" for a change, with X being a tag identifying one of the Apaches (it could be the Apache IP, for example).
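As a sketch of the helper's side of that OK/ERR conversation (Python for illustration; the hard-coded table stands in for whatever database lookup the real helper does, and this assumes the helper receives just %URI per request line):

```python
import sys

# Hypothetical content -> backend map, standing in for the real database lookup.
CONTENT_MAP = {
    "http://[6001::101]/Big/big1.avi": "4001",
    "http://[6001::101]/Big/big2.avi": "5001",
}

def lookup(line):
    """Map one external_acl_type request line to one reply line."""
    url = line.split()[0]
    tag = CONTENT_MAP.get(url)
    if tag:
        return "OK tag=%s" % tag     # matched by 'acl ... tag 4001' in squid.conf
    return "ERR"                     # no backend known -> http_access deny takes over

def helper_loop():
    # one request per line; flush each reply or Squid blocks waiting for it
    for line in sys.stdin:
        print(lookup(line.strip()))
        sys.stdout.flush()
```

The tag= kv-pair is what the later `acl ... tag` lines in squid.conf test against.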

Then add something like the following to squid.conf:

  external_acl_type whichServer ...
  acl findServer external whichServer

  # allow IF and only if we have a backend to send it to
  http_access allow findServer
  http_access deny all

  # check if your helper sent "OK tag=4001" and pass it to server [4001::101]
  acl apache4001Okay tag 4001
  cache_peer [4001::101] parent 80 0 ... name=Apache4001
  cache_peer_access Apache4001 allow apache4001Okay
  cache_peer_access Apache4001 deny all

  # check if your helper sent "OK tag=5001" and pass it to server [5001::101]
  acl apache5001Okay tag 5001
  cache_peer [5001::101] parent 80 0 ... name=Apache5001
  cache_peer_access Apache5001 allow apache5001Okay
  cache_peer_access Apache5001 deny all


Then you just have to check that your backend Apaches are set up to handle the client requests, which they will receive exactly as if the client were contacting them directly - with all client TCP and HTTP level details unchanged (ie a fully transparent proxy).

Amos


*[rewriter_code]*
#!/usr/bin/perl

use warnings;
use strict;
use Fcntl ':flock';

require '/home/alberto/NodeConfig.pm';        # dir/.../NodeConfig.pm

my $dirDB        = $NodeConfig::dir_DB;       # directory of the local database
my $db_name      = $NodeConfig::name_DB;      # name of the local database
my $node_address = $NodeConfig::node_address; # MAR's address
my $DM_address   = $NodeConfig::DM_address;   # DM's address
my $dir_apache   = $NodeConfig::dir_Apache;   # directory of contents (Apache server)
my $dir_DM       = $NodeConfig::dir_DM;       # directory holding DM_req.pl on the DM
my $rootpwd      = $NodeConfig::root_pwd;     # password for root access (to send the request to the DM)

$| = 1;   # unbuffered output - required for Squid helpers

#---------------------------------------------------------------------------
#----------- PARAMETERS (modifying only ip address oCDN) -------------------

while (<>) {
    my @params_http = split;

    # parameters of the http request
    my $url       = $params_http[0];   # url of the http request
    my $ip_client = $params_http[1];   # client ip of the http request

    my $absTime = time();              # absolute time in seconds

    my $db_name      = $NodeConfig::name_DB;
    my $node_address = $NodeConfig::node_address;
    my @copie;

    #------------------- REWRITE URL SQUID FUNCTION ------------------------
    # Check whether the content is inside the cache:
    #   if YES --> go directly to the MAR's cache
    #   if NO  --> forward the request to the DM and wait for the best
    #              cache or Origin path

    open(LIST1, '<', "$dirDB$db_name") or die "cannot open database: $!";
    flock(LIST1, LOCK_SH);
    my @copieS = <LIST1>;
    flock(LIST1, LOCK_UN);
    close(LIST1);

    for my $ind (0 .. $#copieS) {
        my @values   = split(';', $copieS[$ind]);
        my $original = $values[0];
        my $copy     = $values[1];
        my $iT1      = $values[2];

        # seek in the database whether the content is in the cache
        if (($url eq $original) and ($iT1 eq "Y")) {
            my @val1 = split('/', $url);
            if (-e "$dir_apache$val1[3]/$val1[4]") {
                my $newURL = "$val1[0]//$node_address/$val1[3]/$val1[4]";
                print "$newURL\n";
                #print "302:" . "$newURL\n";
                exit;
            }
        }
    }

    # request to the DM for the best position of the content (Origin or other MARs)
    my $req = request($DM_address, $dir_DM, $url, $node_address);
    print "$req\n";
}

#---------------------------- END rewriteurl.pl ----------------------------


#--------------- Subroutine to forward the request to the DM ---------------

# send the request for the best position to the DM (ssh call)
sub request {
    my ($DM_address, $dir_DM, $url, $node_address) = @_;

    # strip the surrounding [ ] from the IPv6 address for ssh
    my $DM_addressSSH = substr($DM_address, 1, length($DM_address) - 2);

    my $req_DM = "sshpass -p '$rootpwd' ssh -o StrictHostKeyChecking=no "
               . "root\@$DM_addressSSH "
               . "'cd $dir_DM && perl ${dir_DM}DM_req.pl $url $node_address'";
    $req_DM = `$req_DM`;

    return $req_DM;
}
