Hi,

On 18/08/2016 6:32 a.m., Amos Jeffries wrote:
>> I imagine layouts where the encrypted traffic itself gets stored
> no way for Squid to know if a previous encrypted stream is reusable.
> To squid it is just a random stream of opaque bytes.

Enlightening! The idea was that omitting decryption, but instead providing
measures to do so on the client's side, raised fewer privacy concerns. As I
see it, with a mere "stream of opaque bytes" any "handle" for such measures
is missing. Thus, if caching of SSL-encrypted data is desired, decryption
is mandatory.

>> I.e. [HTTPS] not cacheable at all [in 2.7.s9]?
> Correct.

Asking here months earlier would have saved me painful failures ...

>> I prefer not to erase objects [...] My [TAGs] may look horrible
>> authenticate_ttl 359996400 seconds
> Lookup "credentials replay attack". [...] There is no other use for
> this directive than attacking your clients.

Ugh! That was set in April 2012, by error (without effect in
2.5.s8_OS2_VAC, so it did no harm): the idea was to turn off Squid's
garbage collection in order to avoid wearing out flash memory. Wrong
place, and I overlooked the credentials aspect ...

>> hierarchy_stoplist cgi-bin
>> refresh_pattern -i /cgi-bin/ 5258880 100% 5258880
> Please use the pattern settings: "-i (/cgi-bin/|\?) 0 0% 0"
> This pattern is designed to work around an issue with truly ancient CGI
> scripts [...] Such scripts are guaranteed to be dynamically changing [...]

The idea comes from http://twiki.cern.ch/twiki/bin/view/CMS/MyOwnSquid ,
to get dynamic web pages cached. I am glad that Squid finally does so!
Conflicting concepts, as it seems. Or is there any regex which applies the
old CGI-script workaround but still caches content with "?" in URLs?

>> refresh_pattern . 5258880 100% 5258880
>> override-expire override-lastmod ignore-no-cache ignore-private
>> positive_dns_ttl 359996400 seconds
> Meaning whenever any domain name moves hosting service you get cutoff
> from it completely for ~11 years or until you fully restart Squid.

Yes, I noticed this :-) (I used to reconfigure Squid from CacheMgr in
those cases.) The setting came from Sjoerd Visser's Dutch page on
Squid 1.1, to work around its missing offline_mode TAG (I just kept
positive_dns_ttl afterwards):
http://vissesh.home.xs4all.nl/multiboot/firewall/squid.html

> When you go to Squid-3, removing both these DNS settings entirely would
> be best. [...] if you really insist [...] ~48hrs should work just as well

Truly. When I set up 2.5.s8_OS2_VAC six years ago, I just added a few new
TAGs to my old 1.1 config. Only this summer did I spend a couple of days
migrating the previous settings into the new order of a fresh 2.7.s9
config (for better comparison), which now comprises a history of all
available OS/2 builds (introduction/disappearance of features etc.). To
be re-done with 3.5.

>> setup is that robust that force-reload [fails unless objects deleted]
> This is done solely by "override-expire".

Perfect. I am far from knowing the config TAGs by heart and thus don't see
how things play together. Enabling "override-expire" in 2012 was a bad
thing.

>> [setup that robust that] PURGE fails unless [objects deleted manually]
> In Squid-2 this is more efficiently done by:
> acl PURGE method PURGE
> http_access deny PURGE

Recently enabled (as well as CacheMgr access) by setting:

  acl localnet src 192.168.0.160/27
  [...]
  acl purge method PURGE
  http_access allow purge localnet
  http_access allow purge localhost
  http_access deny purge

> Squid-3 [...] disables all the PURGE functionality *unless* you have
> "PURGE" method in an ACL like above. It is a good idea for performance
> reasons to remove all mention of "PURGE" when upgrading to Squid-3.

Permitting PURGE has a performance impact?
I enabled it recently, but since reload works now, it could be suppressed.

>> [force-reload fails unless] the ugly "reload_into_ims on" option
>> is set which violates standards.
> reload_into_ims is not a violation of current HTTP/1.1 standards. [...]
> The big(est) problem with it is that Squid-2 is HTTP/1.0 software and
> both reload and IMS revalidation are not defined in that version of the
> protocol. Adding two risky undefined things together multiplies dangers.
> [...] Overall the approach to caching *at any cost* is doing more harm
> than good, both to yourself and to many others.

Disquieting. In fact, I tried to change Squid's default caching behaviour
towards "accumulating" content ("once here, why reload redundant stuff?").
The second, important intention behind this is not to wear out the flash
memory Squid runs on. Data is backed up regularly, but I am afraid of,
e.g., the write accesses of the regular revalidation processes. (A friend
lost a solid-state disk with a Squid cache after only six months.)
Suggestions for a proper compromise are welcome.

Regards
Torsten Kuehn, Weil am Rhein/ Basle

--
View this message in context:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-2-7-s9-HTTPS-proxying-hint-welcome-tp4678986p4679041.html
Sent from the Squid - Users mailing list archive at Nabble.com.

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users
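[Editor's note: for readers following this thread, the advice quoted above
can be collected into one squid.conf fragment. This is a sketch only, not
a tested configuration: the 48-hour TTL follows Amos's "~48hrs" remark,
the refresh_pattern values 0/20%/4320 are Squid's shipped defaults rather
than anything stated in the thread, and "localnet" is assumed to be the
ACL defined earlier in the message.]

  # authenticate_ttl: omit entirely; huge values enable credentials
  # replay attacks, and the default is safe.

  # CGI workaround as recommended; pages whose URL contains "?" but
  # which carry proper validators/expiry headers remain cacheable in
  # Squid 2.7 and later, so no override-* hacks are needed here.
  refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
  refresh_pattern . 0 20% 4320

  # DNS caching: ~48 hours instead of ~11 years, so a moved domain
  # recovers without a full Squid restart.
  positive_dns_ttl 48 hours

  # PURGE access (Squid-2 only; remove all mention of PURGE when
  # upgrading to Squid-3).
  acl purge method PURGE
  http_access allow purge localhost
  http_access allow purge localnet
  http_access deny purge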