Re: Squid cache with SSL

On 25/05/20 8:09 pm, Andrey Etush-Koukharenko wrote:
> Hello, I'm trying to set up a cache for GCP signed URLs using squid 4.10
> I've set ssl_bump:
> http_port 3128 ssl-bump cert=/etc/ssl/squid_ca.pem
> generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
> 
> sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/ssl_db
> -M 4MB
> 
> acl step1 at_step SslBump1
> 
> ssl_bump peek step1
> ssl_bump bump all

The above SSL-Bump configuration tries to auto-generate server
certificates based only on details in the TLS client handshake. This
leads to a huge number of problems, not least of which is completely
breaking TLS security properties.

Prefer doing the bump at step3.
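
A minimal sketch of that shape (the step2/step3 ACL names are
illustrative additions, not in your config; adapt to your policy):

  acl step1 at_step SslBump1
  acl step2 at_step SslBump2
  acl step3 at_step SslBump3

  ssl_bump peek step1
  ssl_bump stare step2
  ssl_bump bump step3

Peeking at step1 and staring at step2 lets Squid fetch the real server
certificate first, so the generated certificate mimics the origin's
details rather than being guessed from the client handshake alone.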


> I've set cache like this:
> 
> refresh_pattern -i my-dev.storage.googleapis.com/.*
> 4320 80% 43200 override-expire ignore-reload ignore-no-store ignore-private

FYI: that does not set up the cache. It provides *default* parameters for
the heuristic expiry algorithm.
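
For reference, the field layout of that line is (times are in minutes):

  # refresh_pattern [-i] regex  min  percent  max  [options]
  refresh_pattern -i my-dev.storage.googleapis.com/.* 4320 80% 43200

Roughly: min and max bound freshness, and percent drives the
last-modified heuristic used when the server sends no explicit expiry.
The options appended to your line then change behaviour as follows: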

* override-expire replaces the server-sent max-age (or Expires) value
with 43200 minutes from object creation.
  This often forces objects to expire from cache long before they
normally would.

* ignore-reload makes Squid ignore requests from the client to update
its cached content.
 This forces content which is stale, outdated, corrupt, or plain wrong
to remain in cache no matter how many times clients try to re-fetch a
valid response.

* ignore-private makes Squid cache content that is never supposed to be
shared between clients.
 To prevent personal data being shared between clients who should never
see it, Squid will revalidate these objects. Usually different data
will be returned, making this just a waste of cache space.

* ignore-no-store makes Squid cache objects that are explicitly
*forbidden* to be stored in a cache.
  80% of 0 seconds == 0 seconds before these objects become stale and
expire from cache.

Given that you described this as a problem with an API doing *signing*
of things, I expect that at least some of those objects will be
security keys. Possibly keys generated specifically per item, where
forced caching is a *BAD* idea.

I recommend removing that line entirely from your config file and
letting the Google developers' instructions do what they are intended
to do with cacheability. At the very least, start from the default
caching behaviour and see how it works normally before adding protocol
violations and unusual (mis)behaviours to how the proxy caches things.
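
For comparison, the stock defaults shipped in squid.conf look like
this (check your distribution's copy for the exact set):

  refresh_pattern ^ftp:           1440    20%     10080
  refresh_pattern ^gopher:        1440    0%      1440
  refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
  refresh_pattern .               0       20%     4320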


> In the cache directory, I see that the object was stored after the
> first call, but when I try to re-run the URL I always get:
> TCP_REFRESH_UNMODIFIED_ABORTED/200

What makes you think anything is going wrong?

* Squid found the object in cache (HIT).
* The object's requirements said to check with the origin server
whether it could still be used (HIT becomes REFRESH).
* The origin server said it was fine to deliver (UNMODIFIED).
* Squid started delivery (status 200).
* The client disconnected before delivery of the response could be
completed (ABORTED).

Clients are allowed to disconnect at any time, for any reason.


Amos