Re: What are the pros and cons of filtering URLs using squid.conf?

On 6/9/2013 8:28 PM, Squidblacklist wrote:
On Sun, 09 Jun 2013 20:05:53 +0300
Eliezer Croitoru <eliezer@xxxxxxxxxxxx> wrote:

On 6/9/2013 6:59 PM, Alex Rousskov wrote:
On 06/09/2013 03:29 AM, Eliezer Croitoru wrote:

Would you prefer filtering based on a reload, or on a persistent DB
like MongoDB or Tokyo Tyrant?

I would prefer to improve Squid so that reconfiguration has no
disruptive effect on traffic, eliminating the "reload is
disruptive for Squid but not for my ICAP service" difference.

There are many important differences between ACL lists, eCAP
adapters, and ICAP services. Reconfiguration handling should not be
one of them.


Cheers,

Alex.

So our aim is to improve Squid's reload!
Perfect. That is exactly what I want.
The main issue is that a static squid.conf cannot satisfy the demand
for on-the-fly DB updates.

If that were possible, Squid would move forward to a very good
point in its development.
Non-disruptive reconfiguration is a good quality in any software.

Let me give you a scenario:
- A filtering solution (not 100% sure it should be based on Squid).
- A human-maintained filtering DB of picture domains and pages.
- Very strict clients that want on-the-fly filtering (one client allows
first and blocks later; the other blocks first and allows later).
- In this scenario we have ratings from -128 to +128 (stored as an int32).
- The light filtering level is -51, which allows first and disallows later.
- A rating of 0 should block first and then allow after human or computer inspection.
A rough sketch of how Squid could consult such a rating DB follows below.
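
For example, a Squid external_acl_type helper could consult the rating DB
per request. This is just a minimal sketch, not the real design: the helper
name, the example ratings, the in-memory dict standing in for the real DB,
and the way the -51/0 levels map to allow/deny are all my assumptions.

#!/usr/bin/env python3
# Minimal sketch of a Squid external_acl_type helper for the rating
# scenario above: look up the destination domain's rating and allow the
# request only if the rating meets the client's threshold.
import sys

# Threshold comes from the helper's command line, e.g. -51 for the
# "light" policy or 0 for the strict one (my assumption about the mapping).
THRESHOLD = int(sys.argv[1]) if len(sys.argv) > 1 else 0

# Stand-in for the human-maintained, persistent rating DB
# (MongoDB, Tokyo Tyrant, ...); the values are made up.
RATINGS = {
    "example.com": 100,
    "bad.example.net": -120,
}

def rating_for(domain):
    # Walk up the labels so subdomains inherit their parent's rating.
    parts = domain.split(".")
    for i in range(len(parts)):
        r = RATINGS.get(".".join(parts[i:]))
        if r is not None:
            return r
    return 0  # unrated domains: assumed to sit at 0

# Squid's external ACL protocol: one lookup key per line on stdin,
# "OK" or "ERR" per line on stdout.
for line in sys.stdin:
    domain = line.strip().lower()
    if not domain:
        continue
    sys.stdout.write("OK\n" if rating_for(domain) >= THRESHOLD else "ERR\n")
    sys.stdout.flush()

Hooked into squid.conf with something like:

external_acl_type rating_check %DST /usr/local/bin/rating_helper.py -51
acl rated_ok external rating_check
http_access deny !rated_ok

Because the helper queries the DB at request time, the ratings can change
on the fly without a reconfigure, which is exactly the part a static
squid.conf cannot do.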

The above is a real-world scenario: a friend of mine designed a proxy
and other helpers around it to make our children's world a cleaner one.
I am not really a fan of the "kids shouldn't see this" approach myself,
but I do understand why people want it and make big efforts to make it
happen.

What do you think about the idea?

Eliezer



I'm all in favor of improving the reload process for Squid. However:

If you have 100% CPU usage when using large ACLs with Squid, there
is a problem with your installation, or you are using an ancient version
of Squid that needs to be rm'd.

CPU usage with large ACLs in Squid 3.x is nominal and not a problem at
all.



-
Signed,

Fix Nichols

http://www.squidblacklist.org

So let me understand:
1 million dstdomains and URLs should not be a problem?
I can write a script that converts blacklists into squid.conf ACL files.
These lists won't be as complex and as good as a continuously updated one, but with them you can almost say goodbye to a basic SquidGuard setup, while leaving Squid as a static (updated once a day) filtering solution.
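
Something along these lines would cover the dstdomain part; just a rough
sketch, and the file names and the one-domain-per-line blacklist format
with "#" comments are my assumptions:

#!/usr/bin/env python3
# Rough sketch: convert a plain blacklist (one domain per line,
# "#" starts a comment) into a Squid dstdomain ACL file.
import sys

def convert(blacklist_path, acl_path):
    seen = set()
    with open(blacklist_path) as src, open(acl_path, "w") as dst:
        for line in src:
            domain = line.split("#", 1)[0].strip().lower()
            if not domain or domain in seen:
                continue
            seen.add(domain)
            # A leading dot makes dstdomain match the domain and all
            # of its subdomains.
            dst.write((domain if domain.startswith(".") else "." + domain) + "\n")

if __name__ == "__main__":
    convert(sys.argv[1], sys.argv[2])

Then squid.conf only needs:

acl blacklist dstdomain "/etc/squid/blacklist.acl"
http_access deny blacklist

plus a daily cron run of the converter followed by "squid -k reconfigure"
to pick up the new list (which brings us back to the cost of the reload).
URL entries would need a separate url_regex ACL rather than dstdomain.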

Eliezer



