Re: [saag] Last Call: <draft-dukhovni-opportunistic-security-01.txt> (Opportunistic Security: some protection most of the time) to Informational RFC

On Tue, Aug 05, 2014 at 11:43:02AM -0400, Stephen Kent wrote:

> The abstract uses the phrase "the established approach" to refer to
> employing protection against passive and active attacks, or no protection at
> all. Since opportunistic security (OS) is a term being defined for use in
> IETF standards, a reader might interpret the phrase in quotes as referring to
> our standards, vs. deployment. While it is true that our security standards
> generally try to address both active and passive attacks, offering no
> protection at all is not a goal of these standards. The author should
> rephrase the text to clarify whether he is discussing standards or
> deployment.

In the abstract, I think it is not essential to distinguish between
goals and consequences.  The abstract is not blaming anyone, and we
do not need to apportion blame.  One way or another, at present, the
situation is as described.  I am discussing the present state of the
Internet, which results from a combination of standards and deployment,
and it is not clear why additional precision is required here.

> *Section 1*
> 
> Despite the fact that the abstract says the memo defines OS, the intro does
> not do so. A number of others have made this comment, both during the SAAG
> discussion of the document and during IETF last call, but the author has
> repeatedly ignored these comments. I suggest a definition of OS should
> appear within this section.

Since the document's goal is to define opportunistic security,
rather than use such a definition to do something else, I felt it
appropriate to offer motivating material in the Introduction, and
the definition in the body of the draft.  There seemed to be
significant feedback supporting the document as-is.

> The text says:
> 
> Since protection against active attacks relies on authentication, which at
> Internet scale is not universally available, while communications traffic
> was sometimes strongly protected, more typically it was not protected at
> all.
> 
> This statement is in the past tense, but the situation described is present
> tense. The use of the past tense here may confuse readers, causing them to
> believe that the problem described is in the past.

The tense mismatch should probably be fixed at the next opportunity to
revise the document, thanks.

> More importantly, the
> phrase "at Internet scale" is not defined anywhere in the document, yet it
> is a recurring theme in the document.

I don't think anyone is likely to be confused about what "Internet-scale"
means.  Nor is it essential that their interpretation of the phrase
match mine.

> The text later says:
> 
> ... with encrypted transmission accessible to most if not all peers, and
> protection against active attacks still available where required by policy
> or opportunistically negotiated.
> 
> Our current security protocols offer protection against active attacks
> "where required by policy" (e.g., IPsec) so this text is misleading, again.

The above quote leaves out essential context:

   Indiscriminate collection of communications traffic would be
   substantially less attractive if security protocols were designed
   to operate at a range of protection levels; with encrypted
   transmission accessible to most if not all peers, and protection
   against active attacks still available where required by policy
   or opportunistically negotiated.

In this context, the point is that designs should operate at a
*range* of protection levels, with encryption ubiquitous and
stronger protection employed where appropriate.

> And the term "opportunistically negotiated" is not defined, so its use here
> adds to the confusion.

When protection against active attacks is not unconditional policy,
a party that strives to maximize security might "discover" (say
via presence of DANE TLSA RRs) that its peers can in fact be
authenticated, and thus employ the appropriate measures to thwart
active attacks.  This is an instance of opportunistic security that
goes beyond unauthenticated encryption.  Likely the below would
be an improvement:

   Indiscriminate collection of communications traffic would be
   substantially less attractive if security protocols were designed
   to operate at a range of protection levels; with encrypted
   transmission accessible to most if not all peers, and protection
   against active attacks still employed where unconditionally required
   by policy or else discovered to be possible with a given peer.
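To make "opportunistically negotiated" concrete, here is a minimal
sketch (names such as choose_protection, has_tlsa, and supports_tls
are illustrative only, not from the draft or any real API) of how a
party might select its protection level per peer:

```python
# Hypothetical sketch: per-peer selection of a protection level.
# All names here are illustrative, not from the draft.

CLEARTEXT = "cleartext"
ENCRYPTED = "unauthenticated-encryption"
AUTHENTICATED = "authenticated-encryption"

def choose_protection(supports_tls, has_tlsa, policy_requires_auth=False):
    """Pick the strongest protection the peer supports.

    policy_requires_auth models an unconditional local policy floor;
    has_tlsa models discovery (say, via DANE TLSA RRs) that the peer
    can in fact be authenticated.
    """
    if policy_requires_auth or has_tlsa:
        # Authentication is unconditionally required, or discovered
        # to be possible: thwart active attacks, do not fall back.
        return AUTHENTICATED
    if supports_tls:
        # The floor for encryption-capable peers: at least encrypt.
        return ENCRYPTED
    # Legacy peer: interoperate rather than refuse to communicate.
    return CLEARTEXT
```

The point of the sketch is that the ladder is evaluated independently
for each peer, with no prior bilateral coordination.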

> The next paragraph refers to key management, again at "Internet scale"
> without definition or explanation. I'm pretty sure the author intends to
> refer to "authenticated" key management, at a specific (but unstated)
> granularity of authentication.

The ambiguity of "Key Management" has already been discussed
separately.  I think the text is sufficient as-is.  Overly precise
definitions require more ink on side-issues which distract from the
main goal of the document.

> The bottom line is that a primary
> motivation for OS is a desire to remove barriers to the use of encryption,

More strongly:

    * Yes at least encrypt when possible, but more generally,
    * Avoid needlessly weak options, and finally,
    * Strive for stronger security than just unauthenticated encryption,
      with any peer for which this is possible.

Do not forget that during the saag discussion that preceded this
draft, this was one of the main differences between our views, and
that I do not subscribe to the view that opportunistic security is
a narrow response to PM or that it should be limited to promoting
just unauthenticated encryption.

> and removing the need for authentication based on certificates is a good way
> to do this.

Not "removing", rather "not requiring".  We lower the floor,
but not the ceiling of the range of acceptable protections.

> But, this simple statement appears nowhere in this document.
> (DANE is cited later in this paragraph, but is dismissed because DNSSEC is
> not yet widely deployed. This is an accurate statement, but the way it is
> presented fails to convey the fundamental issues at play here.)

That's because the simple statement in question over-emphasizes lowering
the floor, and leaves the treatment of the ceiling unaddressed.   Instead
the draft's design principles are:

    * Interoperate to maximize deployment.
    * Maximize security peer by peer.
    * Encrypt by default.
    * No misrepresentation of security.

The second and third of these both explain that unauthenticated
encryption may well be the best available, and should be employed
when that is the case.

> The second paragraph in this section says:
> 
> The PKIX ([RFC5280]) key management model, which is based on broadly trusted
> public certification authorities (CAs), introduces costs that not all peers
> are willing to bear.
> 
> This is another example of inaccurate terminology that engenders confusion.
> I believe the author meant to refer to the Web PKI CA model (currently being
> documented in the WPKOPS WG), not to RFC 5280. Subsequent references to PKIX
> should be removed whenever they are not specifically tied to PKIX RFCs.

I am not alone in conflating the PKIX standards with their deployed
infrastructure.  Here again higher precision is I think not essential,
however we can avoid the problem cheaply enough:

   Encryption is easy, but key management is difficult.  Key
   management at Internet scale remains an incompletely solved
   problem.  The Web PKI key management model, which is based on
   broadly trusted public certification authorities (CAs), introduces
   costs that not all peers are willing to bear.  Web PKI public
   CAs are not sufficient to secure communications when the peer
   reference identity ([RFC6125]) is obtained indirectly over an
   insecure channel or communicating parties don't agree on a
   mutually trusted CA.

by replacing PKIX with "Web PKI" if that is better.

> The second paragraph of this section is trying to explain why extant means
> of key management systems (that are used in the Internet today), do not meet
> the author's goal of Internet scale key management. This is an example of
> the author avoiding the task of creating a definition, and instead relying
> on a set of examples to convey meaning. This is a bad strategy for an RFC.

The RFC is informational (even aspirational), and the introductory
text is informal motivating background.  Simplicity and accessibility
are prioritized over precision where precision imposes a cost on
the complexity of the document.  Perhaps the trade-off is not quite
right in some places, I welcome further feedback on that point.

> When authenticated communication is not possible, unauthenticated encryption
> is still substantially stronger than cleartext.
> 
> This would be better stated as:
> 
> When authenticated communication is not possible, unauthenticated encryption
> is preferable to cleartext transmission, relative to the concerns identified
> in [RFC7258].

This is fine by me.  Anyone else care to comment?

> The section ends with another observation about OS, without defining it:
> 
> In particular, opportunistic security encourages
> unauthenticated encryption when authentication is not an option.
> 
> This is a fine statement, that would be appropriate if only a definition of
> OS had preceded it.

The definition is in the body of the document, not the Introduction.

> *Section 3*
> 
> This section describes the author's design philosophy for OS, which would be
> appropriate IF OS had already been defined!

Ditto.

> The first goal says:
> 
> The primary goal of designs that feature opportunistic security is to be
> able to communicate with any reasonably configured peer.
> 
> I'm pretty sure that, given the vagueness of the phrase "reasonably
> configured peer:" that this is a goal of ALL IETF protocol standards.
> Perhaps the author meant to say that the goal is to maximize the ability of
> communicating peers to encrypt their traffic.

No, the point here is that opportunistic security is first and
foremost not a barrier to communicating.  Security takes a back
seat to moving the bits, provided none of the peers are misconfigured.

I can replace the word "reasonably" with "correctly" if that is less
confusing.  This is reiterated in the final sentence:

      Opportunistic security must not get in the way of the peers
      communicating if neither end is misconfigured.

What is different here from extant security protocols is the relative
priority of security and interoperability.  With OS interoperability
is generally prioritized over security, when all peers are properly
configured (possibly to offer only weak or no security services).

However, when peers are advertising (say via DANE) particular
security services, then OS is expected to demand that the peer
delivers on its promise (peers that promise, but don't deliver
are misconfigured).

> The text then says:
> 
> If many peers are only capable of cleartext, then it is acceptable to fall
> back to cleartext when encryption is not possible.
> 
> The term "many" here is misleading, unless this is referring primarily to
> some multicast scenario. Perhaps the author meant to say:

Many as in a non-negligible fraction of potential peers as with legacy
applications or infrastructure.  Not "many" as in multicast.

I'll change the text to:

    If a non-negligible number of potential peers are only capable
    of cleartext, then it is acceptable to fall back to cleartext
    when encryption is not possible.

if there are no objections to that.

> If a peer is not OS-capable, then an attempt to initiate unauthenticated,
> encrypted communication may fail. In that case, plaintext communication is
> an acceptable outcome.

No, "OS-capable" is not a synonym for "Encryption-capable".  The
two are quite distinct.

> The text then says:
> 
> Interoperability must be possible without a need for the administrators of
> communicating systems to coordinate security settings.

This is borne of a decade of experience with exactly such bilateral
administrator interactions when setting up Web PKI secure channels
for SMTP.  The parties need to agree on which CAs to employ and
what names to validate at the peer domain's MX hosts.

> This is an important principle but the phrase "security settings" is a bit
> vague.

Does this really need elaboration?  The real point is that OS can
be deployed organically without any prior bilateral coordination
between the communicating parties.  Perhaps I should add the word
"prior":

    Interoperability must be possible without a prior need for the
    administrators of communicating systems to coordinate security
    settings.

> But, for TLS, is configuration of a mutually
> acceptable set of cryptographic algorithms a problem?

Yes, though not algorithms, rather trusted CAs and in the case of SMTP also
reference identities, since MX indirection makes the natural reference
identities insecure.

> I think this statement
> implies a mandatory to implement set of cryptographic algorithms that will
> be part of OS standards (else coordination will be needed to ensure
> overlap).

Yes, security protocol designs (OS or otherwise) need to interoperate
by default, but in the case of OS, the need to not get in the way of
communication is even more important, because security is secondary.

> Also, because this statement appears after the comment on
> authentication as a part of OS (if possible) it opens up the question of
> whether the security settings refer just to encryption or also to
> authentication. The text needs to be clearer on this.

Both, but most of the historical friction is with authentication.

> Applications employing opportunistic security need to be deployable at
> Internet scale, with each peer independently configured to meet its own
> security needs (within the practical bounds of the application protocol).
> 
> This wording suggests that use of OS is tied to an application. The TCPINC WG
> is working on what they view as an OS-motivated change to TCP that would not
> change the TCP API. An application would not be aware of encryption provided
> at that layer. Does the author intend to rule out a TCP-based OS approach?
> If not, then this text needs to be fixed, and the fix may not be easy.

I can drop the word "application" if that is an improvement.  There
is no intention to specify the protocol layer in question to be an
"application" protocol.

> The description of the first goal ends with this statement:
> 
> Opportunistic security must not get in the way of the peers communicating if
> neither end is misconfigured.
> 
> Surely we can state this intent more clearly, especially since what
> constitutes "misconfiguration" of a peer is not described anywhere in this
> document.

Misconfiguration here means promising (publishing or negotiating)
security services that are not available or don't work correctly.
How about:

    Opportunistic security must not get in the way of the peers
    communicating if neither end is misconfigured (i.e., neither
    publishes or negotiates security services that are not available
    or don't function correctly).

> The next goal states:
> 
> Subject to the above Internet-scale interoperability goal, opportunistic
> security strives to maximize security based on the capabilities of the peer
> (or peers).
> 
> The last three words are confusing. If we assume that communication involves
> at least two parties, then the capability of the peers trying to engage in
> communication is what matters.
> 
> Change "peer (or peers)" to "peers".

OK.

> The text then says:
> 
> For others, protection against active MiTM attacks may be an option.
> 
> An MiTM attack is ALWAYS active, so the wording here is redundant, and thus
> potentially confusing.

I thought it was reasonable emphasis, but I can drop MiTM here if preferred.

> The text then says:
> 
> Opportunistic security protocols may at times refuse to operate with peers
> for which higher security is expected, but for some reason not achieved.
> 
> There is a subtle issue at play here. It suggests that a protocol that is
> deemed an example of OS might require security services beyond
> confidentiality, e.g., authentication.

Yes, absolutely.  This is one of the main reasons why I wrote an
alternative draft.  Not only might this happen, I am recommending
that OS designs support authentication when possible (peer by peer).

> This seems to contradict the notion
> that an OS protocol require no coordination between administrators to enable
> communication, as stated in the first goal.

No contradiction at all.  Administrator Bob publishes DANE TLSA
RRs.  Administrator Alice independently (before or after Bob's
action) enables DANE TLSA support.  When both are done Alice's
traffic to Bob is authenticated and encrypted.

> Also, the phrase "at times" is
> misleading, unless it is intended to suggest that inconsistent behavior
> between the same set of peers, perhaps based on time of day, is OK.

At times, depends on the peers, and their published capabilities
at the times in question, not just the time of day.  However, it
may be better to say "in some cases".

> The description of this goal ends with the following text:
> 
> The conditions under which connections fail should generally be limited to
> operational errors at one or the other peer or an active attack, so that
> well-maintained systems rarely encounter problems in normal use of
> opportunistic security.
> 
> This sentence immediately follows the discussion of refusing to communicate
> when peers have differing requirements for security services. It is
> confusing, as it appears to ignore the scenario described in that sentence.

The two sentences go together.   Though active attack protection may block
communication, this should only happen when under attack, or when someone
botches their configuration.

> The next goal, encryption by default, begins with a strong statement, that
> is muddled by the choice of words:
> 
> An opportunistic security protocol MUST interoperably achieve at least
> unauthenticated encryption between peer systems that don't explicitly
> disable this capability.
> 
> Perhaps the author meant to say:
> 
> If two peer systems make use of the same OS protocol, they MUST, at a
> minimum, establish unauthenticated, encrypted communication when they
> connect, unless either has explicitly disabled this capability.

How is this different or better?

> Still, this simpler statement would conflict with the prior goal of allowing
> a system to reject unauthenticated communication with a peer, e.g., because
> it requires use of additional security services. This needs to be reconciled
> with prior goals.

No conflict.  This discusses the floor case of *at least*
unauthenticated encryption.  The other text is about the
ceiling case of authenticated encryption when possible.

> The text then says:
> 
> Over time, as peer software is updated to support opportunistic security,
> only legacy systems or a minority of systems where encryption is disabled
> should be communicating in cleartext.
> 
> This seems to ignore the possibility that an active attack might prevent
> peers from enabling OS, even though they are both capable.

The word "should" allows for some exceptions, such as an occasional
active attack.

> The description of this goal ends with a mention of PFS. This is out of
> place in this definition. It should appear elsewhere.

Why is it wrong to encourage the encrypt by default goal to use
PFS? Your own draft made PFS a central goal of OS.  Must this be
a separate design principle?  Why?

> The next goal starts by using the undefined phrase "strong security".
> Several others have suggested that this phrase be removed from the I-D, and
> I concur. If the author wants to use the phrase, it must be defined in
> section 2.

It is largely gone, now in just two places, otherwise mostly replaced by
"protection against both passive and active attacks" or similar language.

Now in this paragraph we're talking about *representation* to users,
and here I see few better alternatives.  On UI issues "strategic
ambiguity" that gets the point across is important.

> The penultimate paragraph in this section finally offers a definition for
> OS. This needs to be moved near the beginning of this document.

I still prefer to defer the definition until after the introductory
material that motivates it, and the design principles that it
summarizes, which are a core part of the definition.

> The word
> "actual" should be elided from the second sentence of the definition. The
> definition ends with:

Given unequal security policy floors and security policy ceilings, actual
protection may differ from minimum required or maximum available.

> ... while allowing fallback to cleartext with peers that do not appear to be
> encryption capable.
> 
> Change "encryption capable" to OS-capable or "OS-enabled".

The peer may not be OS-capable, but may still support encryption.
The cleartext fallback is for lack of encryption support.

> The final paragraph in this section says:
> 
> When possible, opportunistic security SHOULD provide stronger security on a
> peer-by-peer basis.
> 
> This is an obvious attempt to make the definition of OS be very broad,

Not "obvious attempt", but rather clear and multiply stated goal of the
draft.  And as mentioned earlier a clear point of difference during the
saag discussion, and the key reason this draft exists.

> well beyond the primary goal cited earlier.

No, exactly as intended and stated.

> It also conflicts with the goal of
> not requiring coordinated security configuration between peers, another goal
> cited earlier.

No conflict.  See DANE Alice/Bob example above.

> The last sentence in the section appears
> to be the author's pet peeve, and probably does not belong in this document.

It clearly illustrates much of what is wrong with the status quo
and how OS can make things better.  The blunder in question is
rather common, and even if the draft achieves nothing beyond
discouraging such blunders, that would be progress.

> The use of "MUST" here is also questionable, as it is stated w/o explicit
> context. (The author did not say, for example, that this behavior in an
> OS-capable MTA violates the criteria established for OS.)

This blunder MUST be avoided in any MTA, OS or otherwise, but an
appreciation for the fact that the MTA in question is in fact
engaging in OS (by falling back from authenticated transmission),
and making a botch of it (by falling back to cleartext instead
of continuing the encrypted session) is helpful.
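The correct fallback order can be sketched as follows (illustrative
names only, not MTA code; transmission_ladder and the channel labels
are my invention): once the peer is known to offer TLS, cleartext
never appears among the fallbacks.

```python
# Hypothetical sketch of the fallback order for an OS-capable MTA.
# Names and channel labels are illustrative, not any real MTA API.

def transmission_ladder(peer_offers_tls, auth_required=False):
    """Return the channels to try, strongest first.

    The blunder described above is falling back from a failed
    authenticated session straight to cleartext; the encrypted
    session must be retained whenever the peer offers TLS.
    """
    if auth_required:
        # Unconditional policy floor: authenticate or don't deliver.
        return ["authenticated-tls"]
    if peer_offers_tls:
        # Authentication may fail, but the fallback keeps encryption;
        # cleartext is never in this ladder.
        return ["authenticated-tls", "unauthenticated-tls"]
    # Legacy peer with no TLS support at all.
    return ["cleartext"]
```

A delivery attempt simply walks the ladder in order, so the cleartext
blunder becomes structurally impossible for TLS-capable peers.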

> *Section 4*
> 
> Again there is use of the phrase "strong security" which is problematic for
> the reasons cited above.

I think by this point it is understood what this means given all
the earlier text.  If necessary, it can be made more verbose as
in most of the other cases.

> *Section 6*
> 
> The references are not described as informative vs. normative.

The draft is informational, so I thought this was not essential.
If the separation is nonetheless important, that's an easy fix.

-- 
	Viktor.




