Hello Amos, on 18.05.13 you wrote:

[...]

>> The squidguard job is working with a really big blacklist. And
>> working with some specialized ACLs.

> Which, apart from the list files, is all based on received information
> sent to it by Squid.

>> I know "squid" can do this job too - and I maintain a schoolserver
>> which uses many of these possibilities of "squid". But then some
>> other people have to maintain the blacklist. That's no job for the
>> administrator in the school.

> You are the first to mention that change of job.
> The proposal was to:
> * make Squid load the blacklist
> * remove SG from the software chain
> * watch response time improve ?
> Nowhere in that sequence does it require any change of who is
> creating the list.

But that's one of the major problems for a user of any blacklist: who
maintains the blacklist. That's not Squid's job, of course.

> At most the administrator may need to run a tool to convert from some
> strange format to one Squid can load. (FWIW: both squidblacklists.org
> and Shalla provide lists which have already been converted to
> Squid-compatible formats.)

Hmmm - sounds interesting.

[...]

> Note that we have not even got near discussing the content of those
> "regex" lists. I've seen many SquidGuard installations where the
> rationale for holding onto SG was that squid "can't handle this many
> regex".

And at least for a purpose such as a schoolserver that's a valid
objection ... A teacher has to teach pupils, not build regular
expressions for a machine.

> Listing 5 million domain names in a file with some 1% having
> a "/something" path tacked on the end does not make it a regex list.
> ** split the file into domains and domain+path entries. Suddenly you
> have a small file of url_regex, a small file of dstdom_regex and a
> long list of dstdomain ... which Squid can handle.

Yes - I know. But that sounds more like a theory, not like a
downloadable solution. And again: who maintains this solution?

Best regards!
Helmut
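PS: The split Amos describes above can be sketched in a few lines of Python. The input format (one entry per line, an optional "/path" suffix, "#" comment lines) is my assumption, not taken from any particular downloadable list:

```python
import re

def split_blacklist(lines):
    """Separate plain domains from domain+path entries,
    so Squid can use dstdomain for the bulk of the list
    and url_regex only for the few entries with a path."""
    domains = []        # -> dstdomain list (fast, indexed lookup)
    url_patterns = []   # -> url_regex list (slower, pattern match)
    for line in lines:
        entry = line.strip()
        if not entry or entry.startswith("#"):
            continue    # skip blanks and comments
        if "/" in entry:
            # escape regex metacharacters so the entry matches literally
            url_patterns.append(re.escape(entry))
        else:
            # a leading dot makes Squid's dstdomain match subdomains too
            domains.append("." + entry)
    return domains, url_patterns
```

The two output lists would then be written to separate files and referenced from squid.conf with something like `acl blocked dstdomain "/etc/squid/blocked.domains"` and `acl blocked_urls url_regex "/etc/squid/blocked.urls"` (file names are, of course, placeholders).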