Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA


On Fri, Sep 6, 2013 at 9:20 AM, Pete Resnick <presnick@xxxxxxxxxxxxxxxx> wrote:
On 9/6/13 12:54 AM, t.p. wrote:
----- Original Message -----
From: "Phillip Hallam-Baker" <hallam@xxxxxxxxx>
Cc: "IETF Discussion Mailing List" <ietf@xxxxxxxx>
Sent: Friday, September 06, 2013 4:56 AM

The design I think is practical is to eliminate all UI issues by insisting that encryption and decryption are transparent. Any email that can be sent encrypted is sent encrypted.

That sounds like 'End User Fallacy number one', which I encounter all the time in my work: if only everything were encrypted, then we would be completely safe.

Actually, I disagree that this fallacy is at play here. I think we need to separate the concept of end-to-end encryption from authentication when it comes to UI transparency. We design UIs now that get in the user's face about doing encryption if we cannot authenticate the other side, and we need to get over that. In email, we insist that you authenticate the recipient's certificate before we allow you to install it and start encrypting, and we prefer to send things in the clear until that is done. That's silly: it rests on the assumption that encryption isn't worth doing *until* we know it's going to be done completely safely. We need to separate trust and guarantees of safety (which can rely on *later* out-of-band verification) from the whole endeavor of getting encryption used in the first place.
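The decision logic being argued for above can be sketched in a few lines. This is a hypothetical illustration only: the key directory and encrypt_for() are placeholders I've invented, not any real client API, and a real implementation would use S/MIME or OpenPGP.

```python
# Sketch of opportunistic email encryption: encrypt whenever a key can be
# found, fall back to cleartext otherwise, and never block sending on
# out-of-band verification. All names here are hypothetical placeholders.

KEY_DIRECTORY = {
    # recipient address -> discoverable public key (placeholder values)
    "alice@example.com": "alice-public-key",
}

def encrypt_for(public_key: str, body: str) -> str:
    # Placeholder: stands in for real S/MIME or OpenPGP encryption.
    return f"ENCRYPTED[{public_key}]({body})"

def prepare_message(recipient: str, body: str) -> tuple[str, str]:
    """Encrypt transparently if a key is discoverable; never refuse to send."""
    key = KEY_DIRECTORY.get(recipient)
    if key is not None:
        # Opportunistic: encrypt without demanding the user verify the key
        # first; trust can be established out of band later.
        return ("encrypted", encrypt_for(key, body))
    # No key found: send in the clear rather than getting in the user's face.
    return ("plaintext", body)
```

The point of the sketch is that verification failure changes nothing about whether we encrypt; it only affects what assurances we can later claim.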

Actually, let me correct my earlier statement. 

I believe that UIs fail because they require too much effort from the user and they fail because they present too little information. Many times they do both.

What I have been looking at in the short term is how to make sending and receiving secure email ZERO effort, and how to make initialization no more difficult than installing and configuring a regular email app. I think I can show how that can be done, and I think it is a part of the puzzle we can start working on within weeks, without having to do usability studies.


The other part, too little (or inconsistent) information, is also a big problem. Take the email I got from Gmail this morning telling me that someone had tried to access my account from São Paulo. The message told me to change my password but did not tell me whether the attacker had known my password. That is a problem of too little information.

The problem security usability often faces is that the usability mafia are trained to make things easy to learn in ten minutes, because that is how you sell a product. They are frequently clueless when it comes to making software easy to use over the long term. Apple, Google and Microsoft are all terrible at this: they all hide information the user needs to know.

I have some ideas on how to fix that problem as well; in fact I wrote a whole chapter in my book suggesting how to make email security usable by putting an analog of the corporate letterhead onto emails. But that is a longer discussion, and it focuses on authentication rather than confidentiality.


The perfect is the enemy of the good. I think the NSA and GCHQ have often managed to discourage the use of crypto by pushing the standards community to make the pudding so rich that nobody can eat it.


--
Website: http://hallambaker.com/
