Re: Is traffic analysis really a target (was Re: [saag] Is opportunistic unauthenticated encryption a waste of time?)

My first reaction to the proposal of having traffic carry metadata to describe the protocol was that it would be a lot more efficient for middlebox manufacturers to read RFCs. That way they would know everything they needed to know about the protocol without having to analyze metadata for each session. Moreover, sessions would not need to carry all of that metadata just to describe themselves.

Then I thought that there is a simplicity and elegance to self-describing protocols.

Then I realized that people would complain about the chattiness and overhead of self-describing protocols, and we would end up publishing profiles of metadata. Like SERVICE_PROFILE=25 for mail transfer, SERVICE_PROFILE=587 for mail submission, SERVICE_PROFILE=5060 for VoIP, SERVICE_PROFILE=389 for directory lookup, etc. We might even call those profiles RFC 5321, RFC 6409, RFC 3261, and RFC 4511, to save the IETF some work.
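
Purely to make the joke concrete -- none of this syntax exists in any spec, and SERVICE_PROFILE is just the made-up name from the previous paragraph -- a self-describing session might open with something like:

    C: HELLO version=1 SERVICE_PROFILE=587 APPLICATION=mail-submission
    S: OK
    ... ordinary RFC 6409 submission follows ...

at which point the middlebox has learned exactly what the port number and the RFC would have told it anyway.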

On Aug 26, 2014, at 1:44 AM, Nico Williams <nico@xxxxxxxxxxxxxxxx> wrote:

> On Sun, Aug 24, 2014 at 12:32:15PM -0400, Eric Burger wrote:
>> I am concerned with the drive to make all traffic totally opaque.
> I’ll be brief: we have an existence proof of the mess that happens
>> when we make all traffic look benign. It is called “everything over
>> port 80.” That ‘practical’ approach drove the development of deep
> 
> Benign?  No, that's not it.  Ports 80 and 443 (*not* just 80) are used
> for everything for a variety of reasons, one of which is that no one
> could block them entirely, so every site with a firewall simply had to
> have the capability, and the processes, for permitting HTTP/HTTPS
> traffic -- they could NOT afford not to!
> 
> Whereas protocols on other ports...  See below.
> 
>> packet inspection, because everything running over port 80 was no
>> longer HTTP traffic. It meant we could no longer prioritize traffic
>> (in a good sense - *I* want to make sure my VoIP gets ahead of my Web
>> surfing ahead of my FTP). It meant we could no longer apply enterprise
>> policy on different applications. It drove ‘investment’ in the tools
>> that today dominate pervasive monitoring.
> 
> It's true that using HTTP as the IP of the 'Net hurt all sorts of
> things, but it was driven by the massive adoption of HTTP.  Remember the
> term "application gateway"?  What a throwback to the late 80s, early
> 90s.  Application gateways are unheard of now because they're ETOOHARD.
> 
> Firewalls can't cope with a raft of arbitrary, custom protocols, whether
> over IP or over HTTP, but with HTTP they get somewhat more metadata to
> examine.  If you really want to draw a lesson here it is this:
> application protocols need a firewall-friendly substrate of metadata.
> That's HTTP -- no other such substrate exists.
> 
> Sure, it's a bit of a mirage: the HTTP metadata can be faked.  But at
> least with HTTP the firewalls^Wproxies can make sure to get hostnames
> every time, not just IP addresses.
> 
> That's my take.  Maybe it's wrong, but it seems at least plausible.
> 
> If VoIPs and such used different port numbers but still HTTP... they'd
> get through firewalls eventually and you could get your traffic
> prioritization.  It's not so much ports 80/443 that matter.  It's the
> HTTP request line, status line, and headers that do.  You could do
> WebSockets or otherwise tunnel anything over HTTP and the firewalls
> would be happy to let you, IF they like your metadata.
> 
> Nico
> -- 
> 
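
To put a concrete face on the metadata point above: what a firewall or proxy actually gets to inspect is the request line and headers, so even a WebSocket tunnel carrying arbitrary VoIP traffic announces a hostname, a path, and an Upgrade before a single tunneled byte flows. A minimal sketch (voip.example.com and the path are placeholders; the key is the sample value from RFC 6455):

    GET /calls/control HTTP/1.1
    Host: voip.example.com
    Upgrade: websocket
    Connection: Upgrade
    Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
    Sec-WebSocket-Version: 13

Whether the proxy likes that metadata is another question, but it at least has a hostname and a path to hang policy on, which is more than it gets from an opaque flow on a random port.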


