
Re: Squid 5: server_cert_fingerprint not working fine...


 



Hey Fred,

 

First, take into account that on Squid-Users no question is ever ridiculous!

There are a couple of sides to a forum/list, and only one of them is the technical one.

A good mentor of mine once told me the following:

Think of the person on the other side of the conversation as your own son or daughter.

Sometimes a father or a friend says things because they truly love you. The only reason you might not understand the comparison is probably that you are not used to doing these specific tasks.

And I will try to make sense of both sides.

Think about Squid as the “nice” kid in the neighborhood.

Squid is almost the only proxy software out there that has been trying, for a very long time, to provide a solution for everything and every use case.

It is more complicated to support everyone than to write a proxy that does one specific thing very well.

 

Squid has been around since about 1996, will not be taken down easily, and will keep kicking on the web long after many others.

This is despite it not being the most advertised software out there.

The original goal of Squid-Cache was achieved, and not just in a standard way but with real success; every piece of the Internet was, and still is, affected by it.

 

Sometimes I answer my kid with a really rough answer (compared to what I believe should have been a softer one), but my intent is good.

 

This is a place where we would like you, Fred, to get a grasp of what is happening and to help you become better, and coding is only one of the tools for getting there.

The Squid-Cache project has tried, and still tries, to make the proxy as smart as possible and to do no harm while doing its job.

 

For example, ufdbGuard (which is a great piece of code that deserves honor and respect) does destination TLS probing.

The overhead is that (if there is no caching involved) every CONNECT request eventually triggers another TCP/TLS connection that hits the hosting service.

For a single connection from a home user, that is fine.

Multiplied across every connection of an entire ISP's user base, it is not fine…

I will call it what it is: an “amplification attack”.

This means that every single client action hits the origin with a factor of 2 or more.

Squid tries to avoid this as much as possible.

 

I will try to give you a sense of the issue on a human level.

Assume there is a shop on the street that sells… let's say fruit.

One customer comes and asks to taste something small from the store before buying. Being a good and kind shop owner, he lets him do so.

Then a bunch of customers come, each takes a fruit to “taste”, and they walk off without buying anything.

The end result is that the whole stock of that specific fruit is gone, and the owner is left open-mouthed, shocked and… robbed.

 

Take into account that every single TLS connection consumes more CPU than a single Linux kernel “sendfile” operation.

To make more sense of it, a CPU is merely some form of programmable electrical resistor.

If you know something about electronics (these days called electrical engineering), you know that a resistor, and by definition any electrical component, has a specific life span.

 

I believe that you don’t want to harm every CPU on every site your clients are visiting.

Every HIT costs something. One of the goals of Squid-Cache is to reduce these HITs on the origin service, offloading some of the HIT rate from the origin server and letting the local or regional ISP take the HIT on itself, since it is more reasonable for a group of clients to carry the HIT for their own communication demand.

 

For some reason, apart from maybe Microsoft, the gaming industry and a couple of other big players, nobody wants to be a part of this.

And it's not that I don't understand why many go after TLS-everywhere and a couple of other communication trends, but it is mostly because of an unjustified fear.

TLS by itself is not security; rather, it is a burden on the service providers and shop owners. It is really just a “protection” mechanism.

So I would just say that to protect the data you don't need to protect every piece of the shop, but rather make an effort to block the “escape” or “leak” routes.

 

What you want to achieve is good, but what I am trying to say is that you must bring both brains and sensitivity to others on the Internet. If you are probing services on the Internet, you should write a good caching layer that lowers the power of your amplification attack from a factor of 2 down to a much lower ratio. The signature of an SNI+IP+PORT (or NO-SNI+IP+PORT) combination will probably stay the same for at least 12 hours, and with Let's Encrypt around, a domain wildcard certificate stays valid and fresh for weeks at a time (Let's Encrypt issues 90-day certificates).

 

So to summarize:

I didn't touch every technical aspect of the SNI+IP+PORT TLS probing solution, but I hope it is enough to make the case for caching the results of this probing, with Redis for example.
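
Just to make the idea concrete, here is a rough sketch of such a caching layer in Python with Redis. This is only my own illustration, not anything that ships with Squid or ufdbGuard; the key format, the helper names and the 12-hour TTL are all assumptions:

import hashlib
import socket
import ssl

import redis  # pip install redis

CACHE_TTL = 12 * 3600  # assumption: the SNI+ip+port signature is stable for ~12h

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def probe_fingerprint(ip: str, port: int, sni: str | None) -> str:
    # One real TLS handshake against the origin; returns the cert's SHA-256.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False   # we want the raw cert, not validation
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((ip, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=sni) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

def cached_fingerprint(ip: str, port: int, sni: str | None) -> str:
    # Only the first request per key pays the probe cost; every other
    # CONNECT is answered from Redis and never touches the origin.
    key = f"tlsprobe:{sni or 'NO-SNI'}:{ip}:{port}"
    hit = r.get(key)
    if hit is not None:
        return hit
    fp = probe_fingerprint(ip, port, sni)
    r.setex(key, CACHE_TTL, fp)
    return fp

With something like this in front of the probe, a thousand CONNECTs to the same destination cost the origin one extra handshake instead of a thousand.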

 

And if you asked any normal, non-greedy client whether they would prefer to get good security without harming the service provider, they would surely answer that they are fine waiting an extra couple of milliseconds for the caching layer to fetch a good answer, for example from a MariaDB table (layer 2) rather than from Redis (layer 1).
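
A sketch of that two-layer lookup, again only an illustration (the table, the columns and the credentials are made up), could look like:

import pymysql  # pip install pymysql
import redis

r = redis.Redis(decode_responses=True)
db = pymysql.connect(host="localhost", user="probe",
                     password="secret", database="tlscache")

def lookup(key: str) -> str | None:
    # Layer 1: Redis, microseconds.
    hit = r.get(key)
    if hit is not None:
        return hit
    # Layer 2: MariaDB, a few milliseconds - still far cheaper for the
    # Internet than re-probing the origin server.
    with db.cursor() as cur:
        cur.execute("SELECT fingerprint FROM probe_results"
                    " WHERE cache_key = %s", (key,))
        row = cur.fetchone()
    if row is not None:
        r.setex(key, 3600, row[0])  # promote back into layer 1
        return row[0]
    return None  # full miss: only now should the helper probe the origin

Redis answers most lookups in microseconds; MariaDB catches the rest and survives a Redis restart.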

 

And I just must tell you something:

I am a religious person, and we have a very old story about a person who wanted to get to Jerusalem (in the old days).

He got to a place near Jerusalem where you cannot see the city because of the hills, and also because old Jerusalem actually sits on a very low mountain compared to a couple of others in the area.

In this place the kids of Jerusalem were playing and wandering around, and he saw one of them (the kids of Jerusalem are known for their wisdom).

He asked him for directions to Jerusalem, and the kid told him the following:

From here you have two ways to Jerusalem: the short-long and the long-short.

He assumed that the “short-long” way was the shorter one… but…

He started walking this short-long way, and it was a nice and smooth route, but as he began to see Jerusalem he started stumbling into thorns and bushes. After fighting what was a lost cause to reach Jerusalem, he went back to the intersection where he had met the boy.

He then asked him: Why did you tell me that this is the short way? It is the worst way to get to Jerusalem!

The kid then answered him: I told you there are two ways.

One is short but long, while the other is long but short.

He then tried the other way, beginning to understand the basic wisdom of the kids of Jerusalem.

Indeed, the “long-short” way started rough, but not nearly as rough as the rough part of the “short-long” way.

After a while, a bit exhausted from the road, he saw Jerusalem so close that he felt he was already there, and he was relieved, since from that point on the terrain became smooth.

At this point he grasped how smart the kids of Jerusalem are, and threw a blessing into the air: “If this is how smart the kids of Jerusalem are, I wish their wisdom would spread through the world.”

 

I have known your name for a very long time and I believe you are on the right track. Just make sure that the architecture of your helper is smart enough that the amplification it causes reaches its result with much less than a 2x HIT ratio on the services it probes.

 

Yours,

Eliezer  

 

----

Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1ltd@xxxxxxxxx

Web: https://ngtech.co.il/

My-Tube: https://tube.ngtech.co.il/

 

From: squid-users <squid-users-bounces@xxxxxxxxxxxxxxxxxxxxx> On Behalf Of UnveilTech - Support
Sent: Wednesday, 23 November 2022 14:59
To: squid-users@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: [SPAM] Re: Squid 5: server_cert_fingerprint not working fine...

 

Amos,

 

For your information, sslcrtvalidator_program is also not compatible with TLS 1.3.

We have done dozens of tests and we only get TLS 1.2 information with sslcrtvalidator_program.

 

My “question-conclusion” could be ridiculous, but the incompatibility here is a fact, sorry for that.

Instead of a PHP helper we have built a C++ helper (300 lines including comments), and we can also work with TLS 1.3 by using basic OpenSSL functions, we suppose the same ones Squid uses…

 

PS: the OpenSSL is the same one we use to compile Squid 5.7.
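
For illustration only (this is just a sketch of the idea, not our actual helper): Python's ssl module wraps the same OpenSSL, and retrieving the server certificate fingerprint works fine under a forced TLS 1.3 handshake, e.g.:

import hashlib
import socket
import ssl

def tls13_cert_fingerprint(host: str, port: int = 443) -> tuple[str, str]:
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse anything below 1.3
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER certificate
            version = tls.version()                  # e.g. "TLSv1.3"
    return version, hashlib.sha1(der).hexdigest()

print(tls13_cert_fingerprint("www.example.com"))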

 

Ye Fred

 

From: squid-users [mailto:squid-users-bounces@xxxxxxxxxxxxxxxxxxxxx] On Behalf Of David Touzeau
Sent: Saturday, 19 November 2022 19:19
To: squid-users@xxxxxxxxxxxxxxxxxxxxx
Subject: [SPAM] Re: Squid 5: server_cert_fingerprint not working fine...

 

Thanks Amos for this clarification,

We also have the same needs and, indeed, we take the same approach.

It is possible that the structure of Squid cannot, in some cases, recover this type of information,
although conceptually a proxy is nothing more or less than a big browser that surfs in place of the client browsers.

The SHA-1 and certificate information are very valuable because they allow better detection of compromised sites (many malicious sites reuse the same information in their certificates).
This makes it possible to detect “nests” of malicious sites automatically.

Unfortunately, there is madness in the approach to security: there is a race, driven by Google and the browser vendors, to strengthen the security of the tunnels.
What is the advantage of encrypting Wikipedia and YouTube channels?

On the other hand, it is crucial to look inside these streams to detect threats.
The two goals are antinomic...

So TLS 1.3, and soon QUIC on UDP 80/443, will make a proxy useless as these features are rolled out (trust Google to push them),
unless the proxy manages to keep up with this protocol madness race...

For this reason, firewall manufacturers propose client software that fills the protocol-visibility gap in their gateway products, and you can see a growth of workstation protections, such as the EDR concept.

Just an ideological and non-technical approach...

Regards

On 19/11/2022 at 16:50, Amos Jeffries wrote:

On 19/11/2022 2:55 am, UnveilTech - Support wrote:

Hi Amos,

We have tested with "ssl_bump bump" ("ssl_bump all" and "ssl_bump bump sslstep1"); it does not solve the problem.
According to Alex, we can also confirm it is a bug in Squid 5.x with TLS 1.3.


Okay.

It seems Squid is only compatible with TLS 1.2; that is not good for the future...


One bug (or missing ability) does not make the entire protocol "incompatible". It only affects people trying to perform the particular buggy action.
Unfortunately for you (and others), that happens to be accessing this server cert fingerprint.

I/we have been clear from the beginning that *when used properly* TLS/SSL cannot be "bump"ed - that is true for all versions of TLS and of SSL before it. In that same "bump" use-case the server does not provide *any* details; it simply rejects the proxy's attempted connection. In some paranoid security environments the server can reject even a "splice", where the clientHello is passed on unchanged by the proxy. HTTPS use on the web is typically *neither* of those "proper" setups, so SSL-Bump "bump" generally works and "splice" almost always does.

Cheers
Amos

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users

