On 01/10/10 20:39, Boniforti Flavio wrote:
> Hello there.
> I've been googling around and reading some list posts about using a
> transparent proxy with HTTPS (TCP 443) requests, but I did not
> understand whether there is a solution for it *today*.
> My goal is as follows: I want *all browser traffic* to be transparently
> caught by my Squid proxy. None of my clients shall be able to surf
> without passing through my Squid setup, which I'll be using mainly for
> filtering purposes (blocking domains). My second purpose for Squid is
> to generate webalizer stats covering 100% of the web traffic.
> My questions:
> 1) is it in any way possible to have HTTPS traffic (TCP port 443)
> intercepted and sent to my proxy?
Yes of course.
It's only when packets start going back to the client that things go
wrong, starting with the fact that your proxy is unable to send the
security credentials belonging to whichever website the client was
visiting (they are private to the websites' servers).
The client's web browser pops up a "somebody is forging this website"
message unless you can install in the client browser a CA certificate
that makes it trust the certificates your proxy presents.
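For the interception step itself, the sketch below is the usual NAT
approach on a Linux gateway; the interface name and the Squid port are
only placeholders, adjust them to your own network:

  # Redirect outbound HTTPS from the LAN to a port Squid listens on
  # (eth0 and 3129 are examples for your LAN interface and port).
  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 \
      -j REDIRECT --to-port 3129

That only delivers the packets to Squid; it does nothing about the
certificate problem described above.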
> 2) what reasons might there be for *not* being able, or not needing,
> to intercept that sort of traffic?
Motivations? Forgery prevention. Hijacking prevention. The basic design
goals of HTTPS.
Consider: do *you* want to log in to your bank account with the
knowledge that some unknown admin halfway across the world is able to
read the pages that you are loading and see your passwords?
If they can read it, they can just as easily change the page details.
> 3) would I completely miss HTTPS traffic in my webalizer stats if
> there were no way to transparently proxy HTTPS requests?
This is only a problem because of the "transparent" part.
If you can discard the "transparent" part of the setup, the client
browsers will send their HTTPS requests to Squid using the CONNECT
method, which gives webalizer the client IP and destination domain
details along with the traffic sent/received there. All that's missing
is the particular files being fetched.
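To make that concrete: with a configured (non-transparent) proxy, the
only request the browser reveals to Squid for an HTTPS site looks
something like this (the hostname is just an example):

  CONNECT www.example.com:443 HTTP/1.1
  Host: www.example.com:443

Squid can log the client IP, the destination host:port and the bytes
tunnelled, but everything inside the tunnel stays encrypted end to end,
so the individual URLs never appear.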
An alternative is firewall traffic accounting, which can be gathered
just as easily: for example, which client IP is using port 443 (HTTPS)
to contact which external IPs, and how much traffic they sent/received.
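A rough sketch of that kind of accounting with iptables on the gateway
(the chain name and client addresses are just placeholders):

  # Count each client's outbound HTTPS traffic as it is forwarded.
  iptables -N https_acct 2>/dev/null
  iptables -I FORWARD -p tcp --dport 443 -j https_acct
  iptables -A https_acct -s 192.168.1.10 -j RETURN
  iptables -A https_acct -s 192.168.1.11 -j RETURN

  # Read the per-rule packet/byte counters later:
  iptables -L https_acct -v -n -x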
> Ah, BTW: as I *do not* intend to cache HTTPS traffic/requests, would
> it be easier to set up this sort of "logging/filtering"?
What is easier depends on your network setup.
"transparent" is the easy/lazy way to proxy traffic. The costs (in terms
of extra problems) far outweigh the ease of initial configuration as you
are finding.
I find that for the long term the easiest way to capture traffic is to
use WPAD as the primary layer (also called "transparent configuration"
of browsers), with NAT interception (also called "transparent" proxying)
as an under-layer last resort that bounces people to a page instructing
them how to set up their browser for WPAD (i.e. where to find and click
the "auto-detect" button). You can find config info under the term
"captive portal".
Amos
--
Please be using
Current Stable Squid 2.7.STABLE9 or 3.1.8
Beta testers wanted for 3.2.0.2