Dear colleagues:
I had another look at the (now updated) draft: draft-dukhovni-opportunistic-security-02. Some more comments:

(a) The term "perfect forward secrecy" (Section 2) is not defined properly. Suggested changes: (i) change "that are derived" to "that were previously derived"; (ii) remove "or distributed using". In fact, it may be best to just use the definition of RFC 4949 ("For a key agreement protocol, the property that compromise of long-term keying material does not compromise session keys that were previously derived from the long-term material.").

(b) The verbiage in the draft should make it clearer that opportunistic encryption will not jeopardize those entities that wish, by policy, to set up only secure and authentic channels. Essentially, there are three channel categories: (i) unsecured channel; (ii) channel providing confidentiality only; (iii) channel providing confidentiality and authenticity. While opportunistic security aims to enable a shift from (i) to (ii), it may also cause a shift from (iii) to (ii) -- see also the concerns I expressed July 11, 2014, 10:08am EDT [copied at end of this email].
-- There should be language in draft 02 that expresses this concern (such a shift would be a security loss), e.g., in the Security Considerations section.
-- The current language re "design for interoperability" leaves ambiguity as to whether one cares about channels where one would like to enforce, by security policy, channel category (iii). Unless I misunderstand this sentence, the phrase "if authentication is only possible for some peers, then it is acceptable to authenticate only those peers and not the rest" seems to suggest lowest-common-denominator security policies (opening the door to downgrade attacks, denial-of-service attacks, etc.). Currently, the phrase "interoperability must be possible without a need for the administrators of communicating systems to coordinate security settings" suggests that a system where one authenticates entities by verifying a communicated cert against a CA root key acceptable to the receiving end would be taboo (thereby killing off a specific way of realizing (iii) using certs); similarly, this seems to preclude communication settings where either end may have a white list of raw public keys. A much more useful approach seems to be to curtail user/device privileges depending on whether one ends up with channel category (i), (ii), or (iii), where policies may explicitly include blocking communications if channel category (iii) is not achieved, but may also allow specific message types to pass through if one only has an unsecured channel (category (i)) at hand. (A rough sketch of this policy idea follows below.)
-- What about the following change at the end of the "design for interoperability" text (first para of Section 3): replace by "Opportunistic security must not get in the way of the peers communicating if neither end is misconfigured and neither end precludes communicating with the other end by virtue of its own security policy". Note: right now, the term "misconfigured" is not really defined, and the suggestion throughout the draft is that one should allow channel type (ii), no matter what. A finer line is needed here: ultimately, security policies should determine this.
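To make the policy idea above concrete, here is a rough sketch, purely illustrative and not taken from the draft; the peer names, the PEER_POLICY table, and the may_send() helper are all hypothetical, and Python is used only because it is compact:

# Illustrative sketch only -- not from the draft. Channel categories as in (b):
#   (i) unsecured, (ii) confidentiality only, (iii) confidentiality and authenticity.
from enum import IntEnum

class ChannelCategory(IntEnum):
    UNSECURED = 1      # (i)   cleartext
    ENCRYPTED = 2      # (ii)  confidentiality only (unauthenticated encryption)
    AUTHENTICATED = 3  # (iii) confidentiality and authenticity

# Hypothetical per-peer policy: a minimum acceptable category, plus message
# types that may still pass over a weaker channel.
PEER_POLICY = {
    "payments.example.com":    {"minimum": ChannelCategory.AUTHENTICATED,
                                "allowed_below_minimum": set()},
    "mailinglist.example.org": {"minimum": ChannelCategory.ENCRYPTED,
                                "allowed_below_minimum": {"public-announcement"}},
}
DEFAULT_POLICY = {"minimum": ChannelCategory.ENCRYPTED, "allowed_below_minimum": set()}

def may_send(peer: str, achieved: ChannelCategory, message_type: str) -> bool:
    """Enforce the local security policy: block if the achieved channel category
    is below the per-peer minimum, unless this message type is explicitly
    allowed over weaker channels."""
    policy = PEER_POLICY.get(peer, DEFAULT_POLICY)
    if achieved >= policy["minimum"]:
        return True
    return message_type in policy["allowed_below_minimum"]

# Category (ii) was negotiated with a peer for which local policy demands (iii):
assert not may_send("payments.example.com", ChannelCategory.ENCRYPTED, "invoice")

The point being that opportunistic security would then never silently downgrade a peer for which local policy demands category (iii), while still allowing selected traffic over weaker channels where the policy explicitly permits it.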
From that point of view (security policies should decide), the email response excerpted under (c) below is somewhat scary.

(c) In the "maximize security peer by peer" para (Section 3), the phrase "opportunistic security may at times refuse to operate with peers for which higher security is expected, but for some reason not achieved" is somewhat cryptic. If the intention is to capture that ultimately it is up to each peer to enforce its own security policy as to channel category (i), (ii), or (iii) (which I do hope), this could be made clearer. Right now, this para has highly ambiguous language, such as "the conditions under which connections fail should generally be limited to operational error at one or the other peer or an active attack, so that well-maintained systems rarely encounter problems in normal use of opportunistic security", which -- to me -- spells trouble, since it suggests sloppy security policy enforcement (or does "operational error" include a security policy mismatch?).

[excerpt of email from Viktor Dukhovni, Wed Aug 6, 2014, 9:41am EDT]
> Why are we making the fallback conditional on multiple peers?
We are not. Rather, the idea was that, as deployed systems that speak a legacy protocol migrate from just cleartext to OS, once only a negligible minority support only cleartext, the bar could be raised to "at least encrypt", leaving the negligible minority in the dust. In such situations it will still be possible to apply local policy overrides to communicate with the laggards, but otherwise, everyone will refuse to fail to encrypt.

(d) It may be good to articulate that, e.g., a security handshake that by itself only provides for channel type (ii) may provide some additional information to another process that allows elevating this to channel type (iii). An example hereof would be TLS with server authentication, where one does not really verify the cert as part of TLS, but simply sets up an anonymous channel (e.g., using ephemeral Diffie-Hellman) as part of TLS, and has another process, e.g., at a higher layer, verify the authenticity info (e.g., by verifying the certificate chain). This seems to be in line with what some people (e.g., Nico Williams) have suggested re including some language re interface aspects (DANE, etc.). (A rough sketch of this is appended after my signature.)

Best regards, Rene

On 7/11/2014 10:08 AM, Rene Struik wrote:
> Dear colleagues:

--
email: rstruik.ext@xxxxxxxxx | Skype: rstruik
cell: +1 (647) 867-5658 | US: +1 (415) 690-7363
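Appended sketch for point (d), purely for illustration: it uses Python's ssl module rather than a literal anonymous ciphersuite (certificate verification is simply disabled at the TLS layer), the peer name is made up, and the higher-layer check is a hypothetical placeholder for whatever local policy prescribes (chain verification, DANE, a white list of raw public keys, etc.):

# Sketch for (d): the TLS layer itself only yields channel type (ii);
# a separate, higher-layer step inspects the peer certificate and, if it
# authenticates the peer, elevates the session to channel type (iii).
import socket
import ssl

def open_unauthenticated_tls(host: str, port: int = 443) -> ssl.SSLSocket:
    """Set up an encrypted but unauthenticated channel -- type (ii)."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # no certificate verification inside TLS
    sock = socket.create_connection((host, port))
    return ctx.wrap_socket(sock, server_hostname=host)

def higher_layer_authenticates(der_cert: bytes, host: str) -> bool:
    """Hypothetical placeholder for the higher-layer check (e.g., certificate
    chain verification, a DANE lookup, or a local white list of raw public
    keys); the real check depends on local policy."""
    return False  # be conservative in this sketch

tls = open_unauthenticated_tls("www.example.com")  # hypothetical peer
der = tls.getpeercert(binary_form=True)            # DER cert, unverified by TLS
achieved_iii = bool(der) and higher_layer_authenticates(der, "www.example.com")
print("achieved channel category:", "(iii)" if achieved_iii else "(ii)")

The interface point is exactly this hand-off: the handshake exposes the peer's credential, and some other process decides whether the channel may be treated as category (iii).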