
Re: Squid cache with SSL


 



Hey Amos,

 

I am not sure I understand whether there are risks in this subject, and what they would be.

From what I understand so far, Google doesn’t use any DH concept on specific keys.

I do believe that there is a reason for the obvious ABORT.

The client is allowed to ABORT, and in most cases the client software decides to do so if there is an issue with the given certificate.

The most obvious reason for such a case is that the client software tries to peek inside the “given” TLS connection and decide whether it is a good idea to continue the session under those conditions.

 

I do agree that forced caching is a very bad idea.

However, I do believe that there are use cases for such methods… but only in a dev environment.

 

If Google or any other leaf of the network tries to cache the ISP or to push traffic into it, the ISP is allowed by law to do what it needs to do to protect its clients.

I am not sure that there is any risk in doing so compared to what Google did to the internet.

 

Just a scenario I have in mind:

If the world doesn’t really need Google to survive, as some try to argue,

would an IT specialist give up on Google, given a better, much safer alternative?

 

I believe Google is a milestone for humanity. However, if no one understands the risks of the local databases,
why these databases exist and are protected in the first place, and why they shouldn’t be exposed to the public,

there is an opening for those who want to access these databases.

 

Eliezer

 

----

Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: ngtech1ltd@xxxxxxxxx

 

From: Amos Jeffries
Sent: Monday, May 25, 2020 1:02 PM
To: squid-users@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: Squid cache with SSL

 

On 25/05/20 8:09 pm, Andrey Etush-Koukharenko wrote:

> Hello, I'm trying to set up a cache for GCP signed URLs using squid 4.10

> I've set ssl_bump:

> *http_port 3128 ssl-bump cert=/etc/ssl/squid_ca.pem

> generate-host-certificates=on dynamic_cert_mem_cache_size=4MB

>

> sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/ssl_db

> -M 4MB

>

> acl step1 at_step SslBump1

>

> ssl_bump peek step1

> ssl_bump bump all*

 

The above SSL-Bump configuration tries to auto-generate server

certificates based only on details in the TLS client handshake. This

leads to a huge number of problems, not least of which is completely

breaking TLS security properties.

 

Prefer doing the bump at step3, as sketched below.
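
A rough sketch of that approach, keeping your existing cert and
sslcrtd_program settings; the step2/step3 ACLs here are defined the same
way as your existing step1. The idea is to peek at step1 for the SNI,
stare at step2 to retrieve the real server certificate, then bump at
step3 so the generated certificate mimics the server's details rather
than only the client handshake:

  acl step1 at_step SslBump1
  acl step2 at_step SslBump2
  acl step3 at_step SslBump3

  ssl_bump peek step1     # step1: read the TLS client hello / SNI
  ssl_bump stare step2    # step2: retrieve the real server certificate
  ssl_bump bump step3     # step3: generate the mimic certificate and decrypt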

 

 

> *

> I've set cache like this:

>

> *refresh_pattern -i my-dev.storage.googleapis.com/.*

> 4320 80% 43200 override-expire ignore-reload ignore-no-store ignore-private*

> *

 

FYI: that does not set up the cache. It provides *default* parameters for

the heuristic expiry algorithm.

 

* override-expire replaces the max-age (or Expires header) parameter

with 43200 minutes from object creation.

  This often has the effect of forcing objects to expire from cache long

before they normally would.

 

* ignore-reload makes Squid ignore requests from the client to update

its cached content.

This forces content which is stale, outdated, corrupt, or plain wrong

to remain in cache no matter how many times clients try to re-fetch

a valid response.

 

* ignore-private makes Squid cache content that is never supposed to be shared

between clients.

To prevent personal data being shared between clients who should never

see it, Squid will revalidate these objects. Usually different data will

return, making this just a waste of cache space.

 

* ignore-no-store makes Squid cache objects that are explicitly

*forbidden* to be stored in a cache.

  80% of 0 seconds == 0 seconds before these objects become stale and

expire from cache.
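
To make the min/percent/max interaction above concrete, here is a rough
Python sketch of the heuristic freshness idea behind refresh_pattern
(illustrative only, not Squid's actual code); it shows why an object with
no usable Last-Modified distance goes stale immediately once the real
headers are ignored:

  def is_fresh(age, min_s, percent, max_s, lm_age=None):
      # age:    seconds since the object was stored in cache
      # lm_age: seconds between Last-Modified and storage time,
      #         or None if the origin sent no Last-Modified header
      if age > max_s:                   # older than MAX -> stale
          return False
      if lm_age is not None and age < (percent / 100.0) * lm_age:
          return True                   # LM-factor heuristic says fresh
      if age <= min_s:                  # younger than MIN -> fresh
          return True
      return False                      # nothing says fresh -> stale

  # With min=0 and Last-Modified equal to arrival time (lm_age=0),
  # 80% of 0 seconds is 0, so the object is stale one second later:
  print(is_fresh(age=1, min_s=0, percent=80, max_s=43200 * 60, lm_age=0))  # False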

 

Given that you described this as a problem with an API doing *signing*

of things, I expect that at least some of those objects will be security

keys. Possibly generated specifically per-item keys, where forced

caching is a *BAD* idea.

 

I recommend removing that line entirely from your config file and

letting the Google developers' instructions do what they are intended to

do with the cacheability. At the very least start from the default

caching behaviour and see how it works normally before adding protocol

violations and unusual (mis)behaviours to how the proxy caches things.
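
For reference, a stock squid.conf normally ships with heuristic defaults
along these lines, with none of the violation options; that is the
baseline meant by "default caching behaviour" here:

  refresh_pattern ^ftp:              1440  20%  10080
  refresh_pattern -i (/cgi-bin/|\?)     0   0%      0
  refresh_pattern .                     0  20%   4320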

 

 

> *

> In the cache directory, I see that object was stored after the first

> call, but when I try to re-run the URL I always

> get: *TCP_REFRESH_UNMODIFIED_ABORTED/200*

 

What makes you think anything is going wrong?

 

Squid found the object in cache (HIT).

The object's requirements were to check with the origin server about

whether it could still be used (HIT becomes REFRESH).

The origin server said it was fine to deliver (UNMODIFIED).

Squid started delivery (status 200).

The client disconnected before the response delivery could be completed

(ABORTED).

 

Clients are allowed to disconnect at any time, for any reason.

 

 

Amos

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users
