Re: url_rewrite_program shows IP addresses instead of domain name when rewriting SSL/HTTPS


Hi Amos,

I kinda solved the problem (thanks to you!!!).
All that was needed was to peek at the important domains in step2, so as not to cause them harm, and bump everything else in step3. That way I'm able to read the DNS names in the redirect script and block them accordingly.

Here is the relevant part:
acl http_sites dstdomain play.google.com mydomain.com
acl https_sites ssl::server_name play.google.com mydomain.com

ssl_bump peek step1 all
ssl_bump peek step2 https_sites
ssl_bump bump step3 all !https_sites #http_sites won't be bumped anyway. But just to be sure
url_rewrite_access allow all !http_sites
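
In case it helps anyone else, here is roughly what the blocking side of redirect.bash can look like (a stripped-down, untested sketch: it assumes the Squid 3.4+ helper response format with no concurrency channel-ID, and the block page URL is just a placeholder):

#!/bin/bash
# For each request line ("URL extras..."): answer ERR to leave the URL
# untouched, or OK with a redirect to send the client to a block page.
while read -r url extras; do
    host="${url#*://}"; host="${host%%[:/]*}"   # crude host extraction, also handles "host:port"
    case "${host}" in
        play.google.com|mydomain.com)
            echo "ERR" ;;                       # whitelisted: leave the request untouched
        *)
            echo "OK status=302 url=\"http://blockpage.mydomain.com/\"" ;;   # placeholder block page
    esac
done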

Of course I'm still not able to rewrite HTTPS addresses as discussed, but that is a different story, I guess.

The SslPeekAndSplice wiki page needs serious rework though, as much of what was discussed here is not explained on the page, which makes life really hard for noobs like me. Is there a way to contribute back a little by reworking that wiki page? I'll try to write a small post about SslPeekAndSplice in the next few days.

Many thanks again for the great help. I really appreciate it.

Cheers,
Moataz

On Sun, Jul 10, 2016 at 10:42 AM, Amos Jeffries <squid3@xxxxxxxxxxxxx> wrote:
On 10/07/2016 8:13 p.m., Moataz Elmasry wrote:
> Hi Amos,
>
> Thanks, I really learned a lot from your previous email.
>
> going on..
>
> On Fri, Jul 8, 2016 at 1:18 PM, Amos Jeffries <squid3@xxxxxxxxxxxxx> wrote:
>
>> On 8/07/2016 10:20 p.m., Moataz Elmasry wrote:
>>> Hi Amos,
>>>
>>> Do you know any of those 'exceptional' redirectors that can handle https?
>>>
>>
>> I know they exist; some of my clients wrote and use some. But I can't
>> point you to any, if that's what you are asking.
>>
>> I can say though that there are two things that can reliably be done with a
>> CONNECT request by a URL-rewriter:
>>
>> 1) return ERR, explicitly telling Squid not to re-write those tunnels.
>>
>> This trades helper complexity for simpler squid.conf ACLs; both approaches
>> simply tell Squid not to re-write.
>>
>> 2) re-write the URI from domain:port to be IP:port.
>>
> Funny thing is, when I'm getting the URL in redirect.bash, I'm not
> getting an IP. I probed and logged many fields as described on the
> logformat page, and I usually get either the IP or the DNS name inside
> redirect.bash, but not both.
>
>>
>> If the IP it gets re-written to is the one the client was going to, this
>> is in effect telling Squid not to do DNS lookup when figuring out where
>> to send it. That can be useful when you don't want Squid to use
>> alternative IPs it might find via DNS.
>>  (NP: This won't affect the host verify checking as it happens too late.
>> This is actually just a fancy way to enforce the ORIGINAL_DST pass-thru
>> behaviour based on more complex things than host-verify detects)
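>>
>> As a rough sketch (untested; how the original destination IP reaches the
>> helper depends on what you pass via url_rewrite_extras - here it is simply
>> assumed to arrive as the second field), a helper doing either of those two
>> things for a CONNECT line like "www.example.com:443 10.0.0.1 ..." could be:
>>
>> while read -r url orig_dst rest; do
>>     port="${url##*:}"
>>     case "${url}" in
>>         *.example.com:*)
>>             echo "ERR" ;;                                      # (1) leave this tunnel alone
>>         *)
>>             echo "OK rewrite-url=\"${orig_dst}:${port}\"" ;;   # (2) pin it to the original IP:port
>>     esac
>> done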
>>
>>
>>> Ok. So let's ignore the redirection for now and just try to whitelist some
>>> https urls and deny anything else.
>>>
>>> Now I'm trying to peek and bump the connection, just to obtain the
>>> servername without causing much harm, but the https sites are now either
>>> loading infinitely, or loading successfully, where they should have been
>>> blacklisted, assuming the https unwrapping happened successfully. Could you
>>> please have a look and tell me what's wrong with the following
>>> configuration? BTW after playing with ssl_bump I realized that I didn't
>>> really understand the steps (1,2,3), or when to peek/bump/stare
>>> etc. The squid.conf contains some comments and questions.
>>>
>>> squid.conf
>>>
>>> "
>>> acl http_sites dstdomain play.google.com mydomain.com
>>> acl https_sites ssl::server_name play.google.com mydomain.com
>>>
>>> #match any url where the servername in the SNI is not empty
>>> acl haveServerName ssl::server_name_regex .
>>>
>>>
>>> http_access allow http_sites
>>> http_access allow https_sites #My expectation is that this rule is matched
>>> when the https connection has been unwrapped
>>
>> On HTTP traffic the "http_sites" ACL will match the URL domain.
>>
>> On HTTPS traffic without (or before finding) the SNI, neither ACL will
>> match, because the URL is a raw-IP at that stage.
>>
>> On HTTPS traffic with SNI the "http_sites" ACL will match, because the
>> SNI got copied to the request URI.
>>
>> The "https_sites" ACL will only be reached on traffic where the SNI does
>> *not* contain the values its looking for. This test will always be a
>> non-match / false.
>>
> Ouch, I now see in the docs that ssl::server_name is suitable for usage
> within ssl_bump. So this is the only use case I suppose.
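>
> So, if I got it right, the split is roughly: dstdomain stays in the
> http_access rules (it matches plain HTTP, and bumped HTTPS once the SNI
> has been copied into the URL), while ssl::server_name only goes into the
> ssl_bump rules, e.g.:
>
> http_access allow http_sites       # dstdomain: plain HTTP, and bumped HTTPS
> ssl_bump peek step2 https_sites    # ssl::server_name: evaluated during SSL-Bump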
>
>>
>>>
>>> sslcrtd_program /lib/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB
>>>
>>> http_access deny all
>>>
>>> http_port 3127
>>> http_port 3128 intercept
>>> https_port 3129 cert=/etc/squid/ssl/example.com.cert
>>> key=/etc/squid/ssl/example.com.private ssl-bump intercept
>>> generate-host-certificates=on  version=1
>>> options=NO_SSLv2,NO_SSLv3,SINGLE_DH_USE capath=/etc/ssl/certs/
>>>
>>> sslproxy_cert_error allow all
>>> sslproxy_flags DONT_VERIFY_PEER
>>>
>>> acl step1 at_step SslBump1
>>> acl step2 at_step SslBump2
>>> acl step3 at_step SslBump3
>>>
>>>
>>> ssl_bump peek step1  #Is this equivalent to "ssl_bump peek step1 all" ???
>>>
>>
>> Yes. "all" is a test that always produces match / true.
>>
>> The "ssl_bump peek step1 all" means:
>>  If (at_step == SslBump1 and true == true) then do peeking.
>>  else ...
>>
>>> ssl_bump bump haveServerName !https_sites
>>> #What about connections that didn't provide SNI yet? Do they get to have
>>> their own definition for step2?
>>
>> For those:
>>
>>  "haveServerName" being a regex "." pattern will match the raw-IP in the
>> CONNECT request, the SNI value, or any subjectAltName in the server
>> certificate. One of those three will always exist and have a value that
>> '.' is matched against. Basically it can't fail - therefore you can
>> consider it just a complicated (and slow / CPU draining) way of checking
>> "all".
>>
>> AND
>>
>>  "https_sites" produces false. The "!" turns that false into true.
>>
>> So that line matches and "bump" action is done at step 2.
>>
>> Bump being a final action means there is no step 3 for those requests.
>>
>> NOTE:  A side effect of bump at step 2 (aka client-first bumping) is that
>> the certificate Squid generates will be built ONLY from squid.conf
>> settings and clientHello details.
>>  No server involvement, thus a very high chance that the server TLS
>> connection requirements and what Squid offers the client to use will
>> conflict or introduce downgrade attack vulnerabilities into these
>> connections.
>>
>>  Whether that is okay is a grey area with disagreement possibilities on
>> all sides.
>>  * On the one hand you are probably configuring good security for the
>> client connection even when the server connection has worse TLS.
>>  * On the second hand you are potentially configuring something worse than
>> what some servers provide.
>>  * On the third hand you are definitely fooling the client into thinking
>> it has a different security level than the server connection can provide,
>> or vice-versa for the server's knowledge about the client connection. It's
>> risky, and you can expect problems.
>>
>>
>>> #Is this equivalent to "ssl_bump bump step2 haveServerName !https_sites" ??
>>
>> Yes it is.
>>
>>> #Can I use step2 with some other acl?
>>
>> Er. You can use any ACL that has available data for the time and
>> position at which it is tested.
>>  In other words I would not suggest using ACLs that check HTTP response
>> headers at the ssl_bump checking time.
>>
>> At step 2 of the SSL-Bumping process you have client TCP connection details,
>> TLS clientHello details and initial extensions like SNI (well the ones
>> that have been implemented - SNI being the only useful one AFAIK).
>>
>>>
>>> ssl_bump splice all
>>> #Is this now step3 for all? What about those urls which didn't have a match
>>> in step2? Is this step2 for some and step3 for others?
>>
>> Any step2 traffic which fails the "!https_sites" test will match this.
>> Which means there is no step3 for those requests.
>>
>> If you have been paying attention you will have noticed that all traffic
>> passing the "!https_sites" test has been bumped, and all traffic failing
>> that same test has been spliced.
>>
>> ==> Therefore, zero traffic reaches step 3.
>>
>> Many thanks for the detailed clarification, this really helps A LOT!!!!
>
>
>>
>> My advice on this as a general rule-of-thumb is to splice at step 1 or 2
>> if you can. That solves a lot of possible problems with the splicing.
>> And to bump only at step 3 where the mimic feature can avoid a lot of
>> other problems with the bumping.
>>
>> You will still encounter some problems though (guaranteed). Don't forget
>> that TLS is specifically designed to prevent 'bumping' from being done
>> on its connections. The fact that we can offer the feature at all for
>> generic use is a terrible statement about the Internet's poor state of
>> security.
>>
>>
>> Cheers.
>> Amos
>>
>>
> Ok, new try. The following is the common configuration:
> "
>
> acl http_sites dstdomain play.google.com mydomain.com
> acl https_sites ssl::server_name play.google.com mydomain.com
>
> http_access allow http_sites
>
> sslcrtd_program /lib/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB
> http_access deny all
>
> http_port 3127
> http_port 3128 intercept
> https_port 3129 cert=/etc/squid/ssl/example.com.cert
> key=/etc/squid/ssl/example.com.private ssl-bump intercept
> generate-host-certificates=on  version=1
> options=NO_SSLv2,NO_SSLv3,SINGLE_DH_USE capath=/etc/ssl/certs/
>
> sslproxy_cert_error allow all
> sslproxy_flags DONT_VERIFY_PEER
>
> acl step1 at_step SslBump1
> acl step2 at_step SslBump2
> acl step3 at_step SslBump3
>
> url_rewrite_program /bin/bash -c -l /etc/squid/redirect.bash
> url_rewrite_extras "%>a/%>A %<A la=%la:%lp la2=%<a/%<a  la3=%<la:%<lp %un
> %>rm myip=%la myport=%lp  ru=%ru ru2=%>ru ru3=%<ru rd=%>rd rd2=%<rd h=%>h
> ssl1=%ssl::bump_mode ssl2=%ssl::>sni ssl3=%ssl::>cert_subject
> ssl4=%ssl::>cert_issuer  rp1=%rp rp2=%>rp rp3=%<rp h1=%>h h2=%>ha"
> logformat squid "%>a/%>A %<A la=%la:%lp la2=%<a/%<a  la3=%<la:%<lp  %un
> %>rm myip=%la myport=%lp  ru=%ru ru2=%>ru ru3=%<ru rd=%>rd rd2=%<rd h=%>h
> ssl1=%ssl::bump_mode ssl2=%ssl::>sni ssl3=%ssl::>cert_subject
> ssl4=%ssl::>cert_issuer  rp1=%rp rp2=%>rp rp3=%<rp h1=%>h h2=%>ha"
> url_rewrite_access allow all
> "
>
> Using
>
> "ssl_bump splice step1 all
> ssl_bump bump step3 all"
>
> Nothing is blocked. And I don't see any URLs or SNI info in either
> access.log or my redirect.log, only IPs. I'm trying many HTTPS sites.

Because "all" traffic got spliced at step1. Nothing go to the step3 bumping.

Sorry if my general rule-of-thumb description was not clear. I meant
those RoT to be used as a preference for which stage to do splice or bump
at - for the things you want them respectively to apply to.
 You still need other ACLs defining what traffic the action is to be
applied to.
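
Something along these lines (just an untested sketch of the idea, reusing
your https_sites ACL; it assumes, as the SslPeekAndSplice wiki describes,
that staring at step2 is what lets step3 bump with server certificate
mimicking):

acl https_sites ssl::server_name play.google.com mydomain.com

ssl_bump peek step1 all             # non-final: read the SNI, continue to step 2
ssl_bump splice step2 https_sites   # splice the traffic you want left alone, as early as possible
ssl_bump stare step2 all            # stare at the rest so the server cert can be mimicked
ssl_bump bump step3 all             # bump only at step 3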


>
> Using
> "ssl_bump splice step2 all
> ssl_bump bump step3 all"
>

Splice still happens to "all" traffic.

> Same result.
>
> Using
> "
> ssl_bump peek step1 all
> ssl_bump splice step2  all
> ssl_bump bump step3 all
> "
>
> I can see URLs in the access.log and redirect.log but no IPs. Further, I'm
> getting the header forgery warning in the logs, and all pages start
> loading but never finish. Maybe this is something related to the NAT rules
> in iptables?

No.

peek is a non-final action, grabbing the SNI and clientHello details. It
only stops the current step's ACL evaluation. ssl_bump gets re-evaluated
at future steps.

splice and bump are both "final" actions. The SSL-Bumping process in its
entirety stops and does the action chosen. It does not continue to do
any other ssl_bump things once one of them is reached.

In the above peek happens to all traffic, then splice happens to all
traffic.
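
Written out against those three lines:

ssl_bump peek step1 all      # non-final: grab the SNI / clientHello, evaluation continues at step 2
ssl_bump splice step2 all    # final: every connection is spliced here ...
ssl_bump bump step3 all      # ... so this line is never reached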

>
> For info, I'm using the simplest bash redirector for now. Here's the code
> while read -r input; do
>     echo "input=${input}" >>/var/log/squid/redirects.log 2>&1
>     old_url=$(echo "${input}" | awk '{print $1}')
>     echo "${old_url}" || exit 1
> done
>
>
> I'll try squid4 next week, maybe the result will be better

It won't be much better; the problem so far is in the ssl_bump ACL design.

Amos


_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users
