Re: Last Call: <draft-ietf-lamps-eai-addresses-05.txt> (Internationalized Email Addresses in X.509 certificates) to Proposed Standard

On Tue, Jan 24, 2017 at 07:31:09PM -0000, John Levine wrote:

> >My impression is that there is little problem with the intended
> >underlying spec, but the document text needs some tuning.
> 
> Agreed.  The subsequent section on comparing names would likely
> benefit from clearer instructions, e.g.
> 
> a) if the domain contains A-labels, turn them into U-labels
> b) if the domain still contains R-LDH labels, stop, not a valid name.
> c) if the domain contains NR-LDH labels, make them all the same case
> d) do a straight byte comparison of the addresses

I think that restricting R-LDH labels that are not A-labels is too
strict; see, for example, Phil Hallam-Baker's proposed encapsulation
of email addresses with "the mesh" (attached).

  TL;DR: alice@xxxxxxxxxxxxxx--MBTVK-ZKCWT-KHMTE-XM3I7-GSQNK-MLFYE

The "mm" R-LDH namespace (if/when implemented) or some other future
namespace should probably not be excluded at this time.

If the intent is to canonicalize A-labels to U-labels, then perhaps
only "xn--" labels should be proscribed, if any.

On the other hand, I see there is some support for allowing
certificates to contain whatever form is actually used in email
headers, and perhaps more than just one such form.  If so, there
is no need to proscribe any names at all.  The client performs no
conversions, and the certificate would need to be inclusive of any
addresses that are in actual use.

-- 
	Viktor.
--- Begin Message ---
One of the big problems I have found in trying to argue for ways we can improve Internet security is that there are two camps. The incrementalists will only look at solutions that provide an improvement on the status quo in one area, and the perfectionists insist that any solution that does not solve every possible problem isn't worth considering.

How about we do both?

Also to save time:

* Yes, I understand the deployment problem very well, I worked with the guy who solved the network hypertext deployment problem after 20 years of people failing. 
* Yes, everything is end to end secure.
* Yes the transport is also secure to prevent metadata attacks.
* Yes it works with OpenPGP
* Yes it works with S/MIME
* Yes you can use it with SMTP/IMAP/POP
* Yes you can use it with Jabber
* No, you do not have to use a CA
* Yes, you may use a CA where a CA adds value
* Yes, I have considered the lock in problem
* Yes, you can bolt in your own trust model
* REST/HTTPS/JSON.
* AES/RSA/Diffie-Hellman, moving to CurveX for production.
* Yes it is unencumbered, apart from one part which will unlock this year.
* Yes I have a strategy for QRC
* No, it is not a walled garden
* Yes, you can adapt the system to use escrow cryptography, but not without the parties knowing, so this is a front door, not a back door.
* My employer sells endpoint protection systems, the Mesh is not a substitute for endpoint protection, thank you for your interest.
* Reference code runs on Windows and Linux and should run on OSX. It is all MIT licensed, as are the protocol compiler tools.

* Yes, it is easy to use. In fact, it is exactly as easy to use as the existing protocols, because the user doesn't need to know that they are doing security.
* Yes it is easy to configure.
* Yes, I need help, a lot of help

OK so how is this possible?

The first observation is that we now have several protocols that provide users with end to end security and are really easy to use. The only real problem I have with those systems is that they operate inside walled gardens. They are not going to be a replacement for email.

The second is that public key cryptography is now cheap in terms of both cost and performance. We couldn't do very interesting crypto in the 90s because machines bogged down.


There are three contributions made by the Mathematical Mesh:

1) An infrastructure for managing and using client keypairs.

Adding cryptography to a protocol is actually quite easy when both parties have public keypairs. So if we have an infrastructure that allows a user to 'glue' all their devices together into a personal mesh, such that they all have keypairs provisioned for each cryptographic purpose they might need, cryptography becomes really easy for the user.

2) Extend a direct trust model into the DNS

We all know about TOR and onion routing. Well, what if I could have an email address that included my OpenPGP fingerprint? We can. Just use the xx-- DNS label prefix to mark the fingerprint as not an ICANN DNS label, and we can make the fingerprint the TLD:

alice@xxxxxxxxxxxxxx--MBTVK-ZKCWT-KHMTE-XM3I7-GSQNK-MLFYE

This means, 'only send mail to this address in compliance with a policy signed under the fingerprint MBTVK...' Said policy could be an OpenPGP key or it could be a Mesh mail profile.
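
To make the construction concrete, here is a rough sketch (illustration
only; the digest choice, the Base32 grouping, and the number of blocks
are assumptions for the example, not a normative encoding):

    import base64, hashlib

    def key_fingerprint(public_key_bytes, blocks=6):
        digest = hashlib.sha512(public_key_bytes).digest()
        b32 = base64.b32encode(digest).decode('ascii')
        return '-'.join(b32[i:i + 5] for i in range(0, blocks * 5, 5))

    def strong_address(local_part, public_key_bytes, prefix='xx--'):
        # the prefix marks the label as not an ICANN DNS label, as above
        return '%s@%s%s' % (local_part, prefix,
                            key_fingerprint(public_key_bytes))

    # e.g. strong_address('alice', alices_public_key_bytes)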

3) Use of Proxy Re-Encryption (Recryption)

HTTPS provides end to end encryption between the user's client and the web site serving the content. Using Proxy Re-Encryption we can provide end to end encryption between the content creator and the user's client. This is the part of the system that will be locked until late this year (not that I am sure the patent is valid, but why bother checking when it is about to expire).

We all know that two key cryptography is better than one. It is better because encryption and decryption are different functions, and using separate keys for separate functions allows them to be done by different parties. Recryption is three key cryptography, and it is better than two key for exactly the same reason.

Using recryption allows us to develop protocols in which Alice is able to publish a single encryption key but read her email on three different devices, each of which has a separate decryption key so that she can mitigate the risk if one of the devices is lost.
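
A toy sketch of the split-key idea (my illustration, not the Mesh's
actual scheme): classic ElGamal over a small prime group, with Alice's
decryption key split additively between the mail service and one of her
devices, so that neither share alone can decrypt but the two partial
decryptions combine:

    import secrets

    p = 2**127 - 1        # toy modulus; a real system would use a safe curve
    g = 3

    x = secrets.randbelow(p - 1)        # Alice's master decryption key
    y = pow(g, x, p)                    # the single published encryption key

    x_service = secrets.randbelow(p - 1)        # share held by the service
    x_device = (x - x_service) % (p - 1)        # share held by one device

    # A sender encrypts the message m to the published key y.
    m = 42
    k = secrets.randbelow(p - 1)
    c1, c2 = pow(g, k, p), (m * pow(y, k, p)) % p

    # The service applies its share, the device applies its own, and only
    # the combination recovers the plaintext.
    partial = (pow(c1, x_service, p) * pow(c1, x_device, p)) % p
    recovered = (c2 * pow(partial, p - 2, p)) % p    # inverse via Fermat
    assert recovered == m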


Job one is making it easy to manage client side keys. 

Once we have that infrastructure, all else becomes straightforward. My original goal with the Mesh was to make it easy to configure S/MIME and OpenPGP. David Clark asked me to add SSH, which I immediately realized could be the killer app, because the big problem with configuring SSH is that if you do it to a machine at a remote site 1,000 miles away and you screw up, someone has to get on a plane to fix it.

However, any infrastructure that allows a mail application to say 'here are the encryption keys to use to send mail to me and when to use them' can also say 'I accept JSON format nextgen mail'. So this is an infrastructure that allows us to lock down the legacy email systems as well as can be managed, but it also eliminates the biggest barrier to deployment of a successor system.
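
Such an announcement might look something like the sketch below (a
hypothetical shape, purely for illustration, not the actual Mesh mail
profile schema): the same signed object that advertises encryption keys
also advertises the message formats the recipient accepts.

    import json

    profile = {
        "Account": "alice@example.com",          # placeholder address
        "EncryptionKeys": [
            {"Use": "S/MIME", "Key": "..."},
            {"Use": "OpenPGP", "Key": "..."},
        ],
        "AcceptedFormats": [
            "message/rfc822",                    # legacy mail
            "application/json",                  # a JSON successor format
        ],
    }
    print(json.dumps(profile, indent=2))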


I am looking for help to make this happen.



--- End Message ---
