On 06/01/16 00:04, Amos Jeffries wrote:
> Yes. Squid has always been able to, given enough RAM. Squid stores most
> ACLs in memory as splay trees, so entries are sorted by frequency of use,
> which is dynamically adapted over time. Regexes are pre-parsed and
> aggregated together for reduced matching, instead of being re-interpreted
> and parsed per-request.

Great to hear. I've got ACL lists covering some 600,000+ domains (i.e. dstdomain) and 60,000+ URL patterns (i.e. url_regex), and there are a couple of "gotchas" I've picked up during testing:

1. At startup Squid reports "WARNING: there are more than 100 regular expressions. Consider using less REs". Is that warning now legacy and ignorable? (Should it be removed?) Obviously I have over 60,000 REs.

2. Making any change to Squid and restarting/reconfiguring it now means a 12-second outage while Squid reads those ACL lists off SSD, parses them, and so on. With squidGuard that outage is hidden, because squidGuard matches against indexed files instead of the raw lists, so the parsing can be done offline. That behavioural change is pretty dramatic: making a minor, unrelated change to Squid now involves a 10+ second outage (instead of <1 sec).

I'd say "outsourcing" this kind of function to another process (such as a URL rewriter or ICAP server) still has its advantages ;-)

--
Cheers

Jason Haar
Corporate Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
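
For readers curious about the splay-tree behaviour Amos describes, here is a minimal C++ sketch of the move-to-root idea. It is not Squid's actual splay library (which uses proper zig-zig/zig-zag splaying); it only illustrates why frequently requested entries drift toward the root and match in fewer comparisons:

// Minimal illustration of move-to-root behaviour in a binary search tree.
// NOT Squid's real splay implementation -- a conceptual sketch only.
#include <initializer_list>
#include <iostream>
#include <string>

struct Node {
    std::string key;
    Node *left = nullptr;
    Node *right = nullptr;
    explicit Node(std::string k) : key(std::move(k)) {}
};

// Plain BST insert; no balancing needed for the demonstration.
static Node *insert(Node *root, const std::string &key) {
    if (!root) return new Node(key);
    if (key < root->key)
        root->left = insert(root->left, key);
    else if (key > root->key)
        root->right = insert(root->right, key);
    return root;
}

// Search for key and, on the way back out of the recursion, rotate the
// matched node up one level at a time, so it finishes at the root.
static Node *access(Node *root, const std::string &key, bool &found) {
    if (!root) { found = false; return nullptr; }
    if (key == root->key) { found = true; return root; }
    if (key < root->key) {
        root->left = access(root->left, key, found);
        if (found && root->left->key == key) {   // right rotation
            Node *l = root->left;
            root->left = l->right;
            l->right = root;
            return l;
        }
    } else {
        root->right = access(root->right, key, found);
        if (found && root->right->key == key) {  // left rotation
            Node *r = root->right;
            root->right = r->left;
            r->left = root;
            return r;
        }
    }
    return root;
}

int main() {
    Node *root = nullptr;
    for (const char *d : {"example.com", "ads.example.net", "tracker.example.org"})
        root = insert(root, d);

    bool found = false;
    root = access(root, "tracker.example.org", found);
    std::cout << "found=" << found << ", new root=" << root->key << "\n";
    // The hot entry is now the root; the next lookup for it is one comparison.
    return 0;   // (nodes deliberately leaked; this is a throwaway sketch)
}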
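
For concreteness, ACL lists of the size described above are loaded from files in squid.conf along these lines (the ACL names and file paths here are hypothetical, not taken from the post):

# domains.txt: one destination domain per line (600,000+ entries)
acl big_domains dstdomain "/etc/squid/domains.txt"
# url-patterns.txt: one regular expression per line (60,000+ entries)
acl big_urls url_regex -i "/etc/squid/url-patterns.txt"
http_access deny big_domains
http_access deny big_urls

Every restart or "squid -k reconfigure" re-reads and re-parses both files from disk, which is where the multi-second pause comes from.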
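
The squidGuard "outsourcing" arrangement mentioned at the end, by comparison, hooks in as an external helper, so the big lists never pass through squid.conf at all (helper and config paths vary by installation):

# squidGuard matches against pre-compiled .db indexes, built offline with
# "squidGuard -C all", so a Squid restart does not re-parse the raw lists.
url_rewrite_program /usr/local/bin/squidGuard -c /etc/squid/squidGuard.conf
url_rewrite_children 8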