Re: IANA blog article

[trimmed to just the parts I am responding to]


On Sat, Jan 4, 2014 at 3:29 AM, Jari Arkko <jari.arkko@xxxxxxxxx> wrote:

>> I think that the article needs to go back to some basic principles.
>>
>> The whole point of the end-to-end architecture was that it enabled innovation and diversity. In particular it was possible to add applications to the Internet without going to some central authority to ask permission.

> I very much agree with this principle. Although maybe that's a topic for another blog article or discussion…

I know that it is what everyone agrees on. But some people also see the mission of the IETF as being to protect the Internet from harmful experimentation. This is silly for two reasons: first, it is a LOT harder to bring down the Internet than most people imagine; second, the people who are likely to cause such a problem accidentally don't take any notice of the IETF or IANA in any case.

>> The status quo for most registries is that they are permissive but there is a large amount of difference in the criteria and there are experts involved doing reviews for some systems but not others. There is a general lack of consistency. A standards organization should look for consistency.

> I think we are trying. Can you be more specific about areas where there is lack of consistency? We can try to improve rules about specific IANA registries, provide more guidance to experts, etc. if we are not performing well enough in some area.

I see the problem mostly in the crypto area, where the tradition has been that each protocol defines its own registry of schemes. This is a bad approach. I would like to see a single registry that keeps track of the OID, text label and URI identifiers of each algorithm; all crypto applications should use the code points in that registry.
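
To make this concrete, here is a minimal sketch in Python of what a single cross-protocol entry might look like. The structure is my illustration, though the OID and the XML DSig style URI shown for SHA-256 are the real assignments:

    # Sketch of a unified crypto algorithm registry: one entry per
    # algorithm, carrying all three identifier forms so that every
    # protocol draws its code points from the same place.
    UNIFIED_REGISTRY = {
        "SHA-256": {
            "oid": "2.16.840.1.101.3.4.2.1",    # ASN.1 object identifier
            "label": "SHA-256",                  # canonical text label
            "uri": "http://www.w3.org/2001/04/xmlenc#sha256",  # URI form
        },
    }

    def identifiers_for(algorithm):
        """Look up the OID, text label and URI for one algorithm."""
        return UNIFIED_REGISTRY[algorithm]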

Choice of crypto algorithms is an ongoing maintenance task. It is therefore not a task that can or should be performed by Working Groups, unless we want the Working Groups to become permanent.

The IETF should only endorse a single core set of algorithms, with a maximum of one preferred and one alternative algorithm, and this should be an IETF consensus, not a WG consensus. A WG might add additional recommendations to support legacy interop; for example, 3DES is still a de facto requirement for S/MIME.

All other crypto maintenance should be handled through a first-come-first-served common registry.


The mere existence of a code point would not mean that it can or should be used with IPSEC or S/MIME. None of us want to use MD2 or DES for example, even though the code points are well known. All a code point means is that the behavior of the protocol is defined. Caveat ingenio.


One case that will come up soon is this one:
http://datatracker.ietf.org/doc/draft-ietf-jose-json-web-algorithms/

For the sake of saving a few bytes in a protocol using a text-based encoding, the OpenID people decided to use their own text-based identifiers for crypto algorithms. So now the JOSE working group wants to create a new set of algorithm identifiers for all the existing algorithms.

This is not a good practice for a standards organization. We have a perfectly good set of text-based labels already from the PEM days. Even though PEM is deader than a dodo, the code points are widely used and we have an existing registry.


The response to the JOSE demand for a new registry should be the following:

1) Reserve the code points already assigned in OpenID in the common registry for the use of OpenID alone.

2) Require all applications that are not OpenID to use the common registry (a sketch of both points in registry terms follows).
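
As a rough illustration in Python — the canonical label "RSA-SHA-256" is hypothetical, while "RS256" is the existing JOSE/OpenID spelling:

    # Hypothetical common-registry entries: the legacy OpenID label is
    # recorded, but flagged as reserved so that only OpenID may use it;
    # everything else must use the canonical label.
    COMMON_REGISTRY = {
        "RSA-SHA-256": {"reserved_for": None},      # canonical, open to all
        "RS256":       {"reserved_for": "OpenID"},  # legacy OpenID code point
    }

    def may_use(code_point, application):
        """True if the application may use this code point; raises
        KeyError for anything not registered at all."""
        entry = COMMON_REGISTRY[code_point]
        return entry["reserved_for"] in (None, application)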
 

There are some things that are legitimately Working Group decisions. Forking the IANA crypto registries is not. 

* Each code point should identify an algorithm (plus possible parameters) unambiguously

* A single algorithm can be identified by multiple code points.

* Each algorithm should have a single canonical code point which MUST be accepted by all conforming applications.


So OpenID implementations would still be free to (and indeed required to) use the legacy code points for backwards compatibility. But they would be required to accept the common labels as well.
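
A minimal sketch of the resolution rule this implies, using the same hypothetical labels as above: accept any registered alias, but always normalize to the single canonical code point.

    # Several code points may identify one algorithm, but each algorithm
    # has exactly one canonical label that all conforming applications
    # MUST accept.
    ALIASES = {
        "RS256": "RSA-SHA-256",        # legacy OpenID spelling
        "RSA-SHA-256": "RSA-SHA-256",  # canonical label maps to itself
    }

    def canonical(code_point):
        """Resolve a code point to its canonical form, rejecting
        anything not in the common registry."""
        if code_point not in ALIASES:
            raise ValueError("unregistered code point: %r" % code_point)
        return ALIASES[code_point]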

I would further suggest that when we next do a major revision of TLS or IPSEC that is going to affect the algorithm negotiation scheme, we insert one new code point for 'suite indicated in an extension', with this being an indirection to a scheme based on the common registry. It would still be possible to support suite restrictions, but these would be implemented in code (a sketch follows the examples below) and could support expressions of the form

(RSA2048 or RSA4096) and (AES128 or AES192 or AES256) and (..)
or
(ECDSA-splunge) and (AES256)

etc.
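
Evaluating such a restriction could be a few lines of code; here is a sketch in Python that represents the expression as a list of alternative groups, each of which must be satisfied (all algorithm labels hypothetical):

    # (RSA2048 or RSA4096) and (AES128 or AES192 or AES256) becomes a
    # list of groups; the offered set must contain at least one member
    # of every group.
    def suite_allowed(expression, offered):
        return all(any(alg in offered for alg in group) for group in expression)

    restriction = [["RSA2048", "RSA4096"], ["AES128", "AES192", "AES256"]]
    print(suite_allowed(restriction, {"RSA4096", "AES256"}))  # True
    print(suite_allowed(restriction, {"RSA4096", "3DES"}))    # False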

>> Another is that application protocols should be required to reuse code points from common registries rather than define their own.
>>
>> At the moment we have separate crypto registries for TLS, IPSEC, PEM, PKIX and XML Digital Signature. The JOSE folk want to create another. There should be a policy that tells people from the start that there will be no new crypto registries.

> Here I am not so sure. Registries for adding specific crypto algorithms are not merely number allocations; they go with specifications and code that actually runs, say, AES on IPsec or AES on TLS. It is not entirely clear to me that crypto across different protocols and use cases should proceed in lock step. And even if it were useful, it is a difficult change to make retroactively, when the code points in different protocols started out differently.

The definition of a code point is an administrative issue. The only case where this is a standards issue is when some doubt arises because a spec is insufficient or misinterpreted.

The decision to require support of a specific algorithm set to enable interoperability is not really WG material either, because it is a decision that has to be revisited over time.

I don't see much value in having an argument over the choice of crypto in WG foo and then, a year later, starting the whole debate from scratch on protocol bar. Especially when it is the same set of people and the arguments have not changed.

Crypto algorithms are by design a commodity. If they are not plug and play then something is broken in the specification of the algorithm or the protocol.

While crypto protocols are broken down into separate WGs, security is a unified concern that cuts across all protocols and works as a system. It is implemented in much the same way everywhere.


The reason we change algorithm recommendations is that the security evaluations of the algorithms change. So when we deprecated DES, it was in all IETF protocols, not just one.

We are currently phasing out use of SHA-1 across all deployed crypto applications that rely on PKIX. This is not an IETF action or even a CABForum action; it is the consequence of Microsoft deciding to bite the bullet and declare that they are going to drop support for SHA-1 in certain applications, which will in turn force the entire WebPKI to follow suit. That is an outcome the whole security community believes should happen; the only dispute has been over timing.


From an implementer's point of view, having the IETF periodically issue an RFC saying that the recommended crypto algorithms are X, Y and Z would be a big help, because someone could then take the OpenSSL (or Cryptolib, etc.) library and subset it to only those ciphers, knowing that they can still support all the IETF apps.
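
For instance, with Python's ssl module (a wrapper around OpenSSL) an application can already pin itself to such a subset; the cipher string below is just an illustrative AES-only profile, not an IETF recommendation:

    import ssl

    # Restrict a TLS context to an AES-based subset of OpenSSL's
    # cipher list; anything outside the string is never negotiated.
    ctx = ssl.create_default_context()
    ctx.set_ciphers("ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256")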

It would not be a particularly controversial project either. The consensus in the crypto world today is clearly for the following set of required algorithms:

Public Key Encryption:  RSA  (RSA-2048, RSA-4096)
Public Key Signature:  RSA  (RSA-2048, RSA-4096)
Public Key Exchange: DH (DH-2048, DH-4096)
Encryption: AES (AES-128, AES-192, AES-256)
Digest: SHA-2 (SHA-2-128, SHA-2-192, SHA-2-256)
MAC: HMAC-SHA2*

OK, the last one we could argue about. We can also argue about the block cipher modes and whether encrypt-and-authenticate modes are consensus. But those are decisions where the application requirements do actually have an impact. AES-CBC is not interchangeable with AES-CFB.

And for alternatives we would essentially be looking at Suite-B plus SHA-3, with the massive caveat that IPR FUD is going to continue to make them unusable for many applications for quite a while. However happy I might be to accept the litigation risk myself, the IPR holder can go after my customers. Since RIM is up for sale and the patents are likely to be bought by a troll, it is not a risk the industry can currently take.

[Though if the NSA wants to say sorry for spying on the crypto community and sabotaging standards, they could buy the patents themselves and make them public domain]


That, at any rate, is what I mean by consistency. I think the IAB should look at consolidation of IANA registries as one mechanism for achieving greater consistency across protocols and avoiding unnecessary variations. The IAB should not be responsible for deciding which algorithms we use (for example), but it should take responsibility for deciding that the set of algorithms will be decided in one place and not many.



--
Website: http://hallambaker.com/
