
Re: Ignoring query string from url

Henrik,

The url rewrite helper script works fine at low request rates (around 100 req/sec),
but responses slow down as the number of requests increases, and it then takes
10+ seconds to deliver the objects.

Is there a way to optimise it further?

url_rewrite_program  /home/zdn/bin/redirect_parallel.pl
url_rewrite_children 2000
url_rewrite_concurrency 5
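
One idea, borrowing from Henrik's .ext example further down and assuming the
same input format (a channel ID followed by the URL): reply with just the ID
when nothing changed, so Squid does not have to re-parse an unchanged URL.
A rough, untested sketch of such a helper:

#!/usr/bin/perl
# Rough sketch only: read "ID URL ..." lines, strip any query string,
# answer "ID new-URL" on a rewrite or just "ID" when nothing changed.
use strict;
use warnings;
$| = 1;                                    # unbuffered output for a long-running helper
while (my $line = <STDIN>) {
    chomp $line;
    my ($id, $url) = split ' ', $line, 3;  # channel ID, URL, rest ignored
    if (!defined $url) {                   # nothing to rewrite on this line
        print "$id\n" if defined $id;
        next;
    }
    if ($url =~ s/\?.*//) {
        print "$id $url\n";                # rewritten URL
    } else {
        print "$id\n";                     # no change needed
    }
}

It may also be worth measuring fewer url_rewrite_children against a higher
url_rewrite_concurrency, since each child then multiplexes more requests over
a single pipe, though that is something to test rather than a definite fix.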

Regards
Nitesh

On Thu, Oct 30, 2008 at 3:16 PM, nitesh naik <niteshnaik@xxxxxxxxx> wrote:
> There was a mistake on my part; I should have used the following script to
> process concurrent requests. It's working properly now.
>
> #!/usr/bin/perl -an
> # -n wraps the body in a read loop, -a autosplits each input line into @F
> BEGIN { $|=1; }        # unbuffered output so replies reach Squid immediately
> $id = $F[0];           # concurrency channel ID
> $url = $F[1];          # requested URL
> $url =~ s/\?.*//;      # strip the query string
> print "$id $url\n";
> next;
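>
> A quick way to sanity-check it by hand (assuming it is saved as
> redirect_parallel.pl and made executable; the URL is just an example) is to
> feed it one line on stdin and look at the reply:
>
>   $ printf '0 http://example.com/file.ext?x=1\n' | ./redirect_parallel.pl
>   0 http://example.com/file.ext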
>
> Regards
> Nitesh
>
> On Thu, Oct 30, 2008 at 12:15 PM, nitesh naik <niteshnaik@xxxxxxxxx> wrote:
>> Henrik,
>>
>> With this approach I see that only one redirector process is being
>> used and requests are processed in serial order. This causes delays in
>> serving the objects, and even the response for cached objects is slower.
>>
>> I tried changing url_rewrite_concurrency to 1, but with that setting
>> Squid does not cache the object. I guess I need a url rewrite
>> program that processes requests in parallel to handle a load of
>> 5000 req/sec.
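>>
>> For concreteness, the directives involved would look something like this
>> (the numbers are only placeholders to experiment with, not tested values):
>>
>> url_rewrite_program  /home/zdn/bin/redirect_parallel.pl
>> url_rewrite_children 50
>> url_rewrite_concurrency 100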
>>
>> Regards
>> Nitesh
>>
>> On Mon, Oct 27, 2008 at 5:18 PM, Henrik Nordstrom
>> <henrik@xxxxxxxxxxxxxxxxxxx> wrote:
>>> See earlier response.
>>>
>>> On Mon, 2008-10-27 at 16:59 +0530, nitesh naik wrote:
>>>> Henrik,
>>>>
>>>> What if I use the following code? The logic is the same as in your program, right?
>>>>
>>>>
>>>> #!/usr/bin/perl
>>>> $|=1;
>>>> while (<>) {
>>>>     s|(.*)\?(.*$)|$1|;
>>>>     print;
>>>>     next;
>>>> }
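>>>>
>>>> One thing I am not sure about: with url_rewrite_concurrency set, each
>>>> line starts with a channel ID and may carry extra request details after
>>>> the URL, so a whole-line substitution passes them straight through.
>>>> For example (the fields after the URL are only illustrative, they vary
>>>> with the Squid version):
>>>>
>>>>     in:  0 http://example.com/file.ext?x=1 192.0.2.1/- - GET
>>>>     out: 0 http://example.com/file.ext
>>>>
>>>> while a line without a "?" would be echoed back unchanged, extra fields
>>>> and all, instead of a bare "0" meaning "no change". Is that a problem?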
>>>>
>>>> Regards
>>>> Nitesh
>>>>
>>>> On Mon, Oct 27, 2008 at 4:25 PM, Henrik Nordstrom
>>>> <henrik@xxxxxxxxxxxxxxxxxxx> wrote:
>>>> >
>>>> > Sorry, I forgot the following important line in both scripts:
>>>> >
>>>> > BEGIN { $|=1; }
>>>> >
>>>> > It should be inserted as the second line in each script (just after the #! line).
>>>> >
>>>> >
>>>> > On Mon, 2008-10-27 at 11:48 +0100, Henrik Nordstrom wrote:
>>>> >
>>>> > > Example script removing query strings from any file ending in .ext:
>>>> > >
>>>> > > #!/usr/bin/perl -an
>>>> > > $id = $F[0];
>>>> > > $url = $F[1];
>>>> > > if ($url =~ m#\.ext\?#) {
>>>> > >         $url =~ s/\?.*//;
>>>> > >         print "$id $url\n";
>>>> > >         next;
>>>> > > }
>>>> > > print "$id\n";
>>>> > > next;
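>>>> > >
>>>> > > To see both branches by hand, assuming the script is saved as, say,
>>>> > > strip_ext.pl and made executable (the name is only for illustration):
>>>> > >
>>>> > >   $ printf '0 http://example.com/a.ext?x=1\n1 http://example.com/b.jpg\n' | ./strip_ext.pl
>>>> > >   0 http://example.com/a.ext
>>>> > >   1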
>>>> > >
>>>> > >
>>>> > > Or if you want to keep it real simple:
>>>> > >
>>>> > > #!/usr/bin/perl -p
>>>> > > s%\.ext\?.*%.ext%;
>>>> > >
>>>> > > but it doesn't illustrate the principle as well, and it causes a bit
>>>> > > more work for Squid, since every request gets a full rewrite reply
>>>> > > back even when nothing changed (but not much).
>>>> > >
>>>> > > > I am still not clear on how to write a
>>>> > > > helper program which will process requests in parallel using Perl. Do
>>>> > > > you think squirm with 1500 child processes works differently
>>>> > > > compared to the solution you are talking about?
>>>> > >
>>>> > > Yes.
>>>> > >
>>>> > > Regards
>>>> > > Henrik
>>>
>>
>

