Re: Status (was: Last Call: <draft-farrell-perpass-attack-02.txt> (Pervasive Monitoring is an Attack) to Best Current Practice)

On Wed, Jan 1, 2014 at 7:42 PM, Scott Brim <scott.brim@xxxxxxxxx> wrote:
On Wed, Jan 1, 2014 at 3:34 PM, Andrew Sullivan <ajs@xxxxxxxxxxxxxxxxxx> wrote:
> On Wed, Jan 01, 2014 at 12:13:31PM -0800, Dave Crocker wrote:
>> ps.  My own suggestion is Experimental.
>
> Please, no.  That status has been abused enough.  What would you be
> trying to learn by this experiment?  What would the conditions be
> that you would conclude you knew the answer?  If you can't answer
> those questions, "Experimental" is just wrong.

Right. This is not a protocol that needs field testing, it's a
statement of policy.

The thing that needs to be clarified is apparently what the policy is.

+1 

The problem here, in my view, is that the appropriate privacy protections depend on where in the stack a protocol sits. So it is not appropriate to say that 'every protocol must do <splunge>', because certain concerns can only be addressed at certain layers.

For example, attempting to address traffic or metadata leakage at the application layer is generally a bad plan, because that information is needed to route the data packets.
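To make the point concrete, here is a minimal sketch (the function names and addresses are purely illustrative, not from any real implementation) showing why the application layer cannot hide this metadata: the IPv4 header itself must carry source and destination addresses in the clear so routers can forward the packet, so any on-path observer learns the endpoints no matter how the payload is encrypted.

```python
import socket
import struct

def build_ipv4_header(src: str, dst: str) -> bytes:
    """Pack a minimal 20-byte IPv4 header (no options, checksum zeroed)."""
    version_ihl = (4 << 4) | 5  # IPv4, header length = 5 x 32-bit words
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl,  # version/IHL
        0,            # TOS
        20,           # total length (header only here)
        0,            # identification
        0,            # flags/fragment offset
        64,           # TTL
        6,            # protocol = TCP
        0,            # checksum (left zero for illustration)
        socket.inet_aton(src),
        socket.inet_aton(dst),
    )

def observe_endpoints(header: bytes) -> tuple[str, str]:
    """What a passive observer learns, regardless of payload encryption."""
    src, dst = struct.unpack("!4s4s", header[12:20])
    return socket.inet_ntoa(src), socket.inet_ntoa(dst)

header = build_ipv4_header("192.0.2.1", "198.51.100.7")
print(observe_endpoints(header))  # ('192.0.2.1', '198.51.100.7')
```

Encrypting everything above the IP layer changes nothing in this picture; the addressing fields stay readable because the network needs them to do its job.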


It is not clear to me that the IETF can actually address traffic privacy, because that probably requires changes at the link layer. Yes, I know about Tor, but its privacy protections come at a huge cost in efficiency, and its use is sufficiently limited that when Harvard had the bomb scare the other week, the authorities simply brought the five people using Tor on campus in for questioning to find the culprit.

If I am giving packets to a carrier and expecting them to route those packets to their destination, then I rather expect the carrier to know where they are being sent.

We could certainly propose link-layer schemes that would allow the carrier to avoid leaking that information to a third party, but those seem to be IEEE territory rather than IETF territory.


Where I see the document being useful is not necessarily as a pass/fail criterion for standards proposals but as a basis for chartering new WGs. I have some proposals, and so do others. Should we be looking for one working group, two, many? How should the effort relate to past work, etc.?

For the past 20 years the bulk of IETF security work has been driven by enterprise and government concerns rather than end-user concerns. One consequence is that the protocols we have are only 'good enough for government work'. IPsec and S/MIME work about as well as my Palm Treo did: the Treo was certainly adequate for my business needs at the time, but it was not an iPhone, and not even close.

IPsec could have worked so much better, and so could S/MIME and PGP. There isn't an enterprise business case for doing that work, but it is essential if we are going to get a billion people using a security protocol.


I now have running code (incomplete, but it does run) demonstrating that 'frictionless cryptography' for email is achievable using existing clients (no plug-ins) and a few slight changes in approach. By frictionless I mean that sending and receiving encrypted mail requires absolutely no more effort than normal mail: that is, ZERO additional mouse clicks.

We could have done this ten years ago, but the enterprise market that buys S/MIME and OpenPGP products does not demand frictionless cryptography. Only the general public is that demanding.



--
Website: http://hallambaker.com/
