Comments on draft-dukhovni-opportunistic-security-02

Abstract

The abstract uses the phrase “the established approach” to refer to employing protection against passive and active attacks, or no protection at all. Since opportunistic security (OS) is a term being defined for use in IETF standards, a reader might interpret the phrase in quotes as referring to our standards, vs. deployment. While it is true that our security standards generally try to address both active and passive attacks, offering no protection at all is not a goal of these standards. The author should rephrase the text to clarify whether he is discussing standards or deployment.
Section 1

Despite the fact that the abstract says the memo defines OS, the intro does not do so. A number of others have made this comment, both during the SAAG discussion of the document and during IETF last call, but the author has repeatedly ignored these comments. I suggest a definition of OS should appear within this section.
The text says: “Since protection against active attacks relies on authentication, which at Internet scale is not universally available, while communications traffic was sometimes strongly protected, more typically it was not protected at all.”

This statement is in the past tense, but the situation described is present tense. The use of the past tense here may confuse readers, causing them to believe that the problem described is in the past. More importantly, the phrase “at Internet scale” is not defined anywhere in the document, yet it is a recurring theme in the document.
The text later says: “… with encrypted transmission accessible to most if not all peers, and protection against active attacks still available where required by policy or opportunistically negotiated.”

Our current security protocols offer protection against active attacks “where required by policy” (e.g., IPsec), so this text is misleading, again. And the term “opportunistically negotiated” is not defined, so its use here adds to the confusion.
The next paragraph refers to key management, again at “Internet scale” without definition or explanation. I’m pretty sure the author intends to refer to “authenticated” key management, at a specific (but unstated) granularity of authentication. The term “key management” is defined in RFC 4949 (Internet Security Glossary), citing two well-regarded sources. It is a very broad term, and it does not always imply the level of authentication that the author seems to intend. For example, OSIRM defines key management as: “The generation, storage, distribution, deletion, archiving and application of keys in accordance with a security policy.” Authentication is implied only insofar as a security policy mandates it. Thus, for example, keys may be distributed to a set of individuals who are all authorized to access a set of sensitive data. The identities of these individuals would likely be used in distributing the keys, but there might not be any need to bind the identity of an individual to the keys per se. Thus the statement in the I-D is overly simplistic when discussing key management in general.
One can argue that authenticated key management for some class of peers is available at Internet scale, via the use of X.509 certificates and protocols such as TLS. In the past, the burden (monetary cost and paperwork) of acquiring a certificate from one of the trust anchors embedded in browsers and operating systems was a significant deterrent. However, the situation has changed. Costs have become low (for non-EV certificates), even free in some cases (e.g., CAcert.org). The DANE work (e.g., RFC 6698) provides another example of a way to make use of certificates without paying a CA. The bottom line is that a primary motivation for OS is a desire to remove barriers to the use of encryption, and removing the need for authentication based on certificates is a good way to do this. But this simple statement appears nowhere in this document. (DANE is cited later in this paragraph, but is dismissed because DNSSEC is not yet widely deployed. This is an accurate statement, but the way it is presented fails to convey the fundamental issues at play here.)
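To make the DANE point concrete, here is a minimal sketch (mine, not the I-D’s) of how a server operator could derive a TLSA record for a self-issued certificate under RFC 6698, with no CA involved; the function name, the example domain, and the use of the third-party Python “cryptography” package are illustrative assumptions.

    # Sketch: build a DANE TLSA "3 1 1" record value (RFC 6698), i.e. a DANE-EE
    # association over the SHA-256 hash of the certificate's SubjectPublicKeyInfo.
    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def tlsa_3_1_1(pem_cert: bytes) -> str:
        cert = x509.load_pem_x509_certificate(pem_cert)
        spki = cert.public_key().public_bytes(
            Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
        digest = hashlib.sha256(spki).hexdigest()
        # usage 3 (DANE-EE), selector 1 (SPKI), matching type 1 (SHA-256)
        return f"_443._tcp.example.com. IN TLSA 3 1 1 {digest}"

Publishing such a record in a DNSSEC-signed zone is what lets a peer be authenticated without paying a CA, which is exactly the barrier-removal argument the I-D never states plainly.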
The second paragraph in this section says: “The PKIX ([RFC5280]) key management model, which is based on broadly trusted public certification authorities (CAs), introduces costs that not all peers are willing to bear.”

This is another example of inaccurate terminology that engenders confusion. I believe the author meant to refer to the Web PKI CA model (currently being documented in the WPKOPS WG), not to RFC 5280. Subsequent references to PKIX should be removed whenever they are not specifically tied to PKIX RFCs.
The second paragraph of this section is trying to explain why extant key management systems (those used in the Internet today) do not meet the author’s goal of Internet-scale key management. This is an example of the author avoiding the task of creating a definition, and instead relying on a set of examples to convey meaning. This is a bad strategy for an RFC.
The section ends with a paragraph that includes this sentence: “When authenticated communication is not possible, unauthenticated encryption is still substantially stronger than cleartext.”

This would be better stated as: “When authenticated communication is not possible, unauthenticated encryption is preferable to cleartext transmission, relative to the concerns identified in [RFC7258].”
The section ends with another observation about OS, without defining it: “In particular, opportunistic security encourages unauthenticated encryption when authentication is not an option.” This is a fine statement that would be appropriate, if only a definition of OS had preceded it.

Section 3
This section describes the author’s design philosophy for OS, which would be appropriate IF OS had already been defined!
The first goal says: “The primary goal of designs that feature opportunistic security is to be able to communicate with any reasonably configured peer.”

I’m pretty sure that, given the vagueness of the phrase “reasonably configured peer,” this is a goal of ALL IETF protocol standards. Perhaps the author meant to say that the goal is to maximize the ability of communicating peers to encrypt their traffic.
The text then says: “If many peers are only capable of cleartext, then it is acceptable to fall back to cleartext when encryption is not possible.”

The term “many” here is misleading, unless this is referring primarily to some multicast scenario. Perhaps the author meant to say: “If a peer is not OS-capable, then an attempt to initiate unauthenticated, encrypted communication may fail. In that case, plaintext communication is an acceptable outcome.” Of course this requires defining what OS-capable or OS-enabled means, but that should have been done in Sections 1 and 2 anyway.
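The suggested rewording can be illustrated with a small sketch (mine, not the I-D’s): attempt unauthenticated encryption first and accept cleartext only if the peer turns out not to be OS-capable. It assumes a TLS-on-connect service rather than an in-protocol upgrade, and the helper name is hypothetical.

    import socket
    import ssl

    def open_opportunistic_channel(host: str, port: int):
        """Try unauthenticated TLS first; fall back to cleartext if the peer
        cannot negotiate encryption."""
        raw = socket.create_connection((host, port), timeout=10)
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False        # no authentication is attempted
        ctx.verify_mode = ssl.CERT_NONE   # unauthenticated encryption only
        try:
            return ctx.wrap_socket(raw, server_hostname=host), "encrypted"
        except OSError:                   # includes ssl.SSLError: peer is not OS-capable
            raw.close()
            return socket.create_connection((host, port), timeout=10), "cleartext"

A real OS protocol would more likely negotiate the upgrade within the application protocol (as SMTP does with STARTTLS), but the fallback decision at the end is the same.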
The text then says: “Interoperability must be possible without a need for the administrators of communicating systems to coordinate security settings.”

This is an important principle, but the phrase “security settings” is a bit vague. For IPsec we know that configuring access controls is a serious impediment to deployment. But, for TLS, is configuration of a mutually acceptable set of cryptographic algorithms a problem? I think this statement implies a mandatory-to-implement set of cryptographic algorithms that will be part of OS standards (else coordination will be needed to ensure overlap). I suggest this implied requirement for OS be mentioned, perhaps parenthetically. Also, because this statement appears after the comment on authentication as a part of OS (if possible), it opens up the question of whether the security settings refer just to encryption or also to authentication. The text needs to be clearer on this.
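A toy sketch (my illustration, with hypothetical suite names) of the inference being made here: if administrators never coordinate settings, interoperability can only be guaranteed by a mandatory-to-implement algorithm set that every configuration retains.

    # Mandatory-to-implement (MTI) suite; TLS 1.2 (RFC 5246) mandates this one.
    MTI_SUITE = "TLS_RSA_WITH_AES_128_CBC_SHA"

    def negotiate(client_suites, server_suites):
        """Return the first mutually supported suite, or None if there is no overlap."""
        common = [s for s in client_suites if s in server_suites]
        return common[0] if common else None

    # Independently chosen settings may simply fail to overlap ...
    assert negotiate(["SUITE_A"], ["SUITE_B"]) is None
    # ... unless every implementation is required to offer the MTI suite.
    assert negotiate(["SUITE_A", MTI_SUITE], ["SUITE_B", MTI_SUITE]) == MTI_SUITE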
The text then says: “Applications employing opportunistic security need to be deployable at Internet scale, with each peer independently configured to meet its own security needs (within the practical bounds of the application protocol).”

This wording suggests that use of OS is tied to an application. The TCPINC WG is working on what they view as an OS-motivated change to TCP that would not change the TCP API. An application would not be aware of encryption provided at that layer. Does the author intend to rule out a TCP-based OS approach? If not, then this text needs to be fixed, and the fix may not be easy.
The description of the first goal ends with this statement: “Opportunistic security must not get in the way of the peers communicating if neither end is misconfigured.”

Surely we can state this intent more clearly, especially since what constitutes “misconfiguration” of a peer is not described anywhere in this document.
The next goal states: “Subject to the above Internet-scale interoperability goal, opportunistic security strives to maximize security based on the capabilities of the peer (or peers).”

The last three words are confusing. If we assume that communication involves at least two parties, then the capability of the peers trying to engage in communication is what matters. Change “peer (or peers)” to “peers”.
The text then says: “For others, protection against active MiTM attacks may be an option.”

An MiTM attack is ALWAYS active, so the wording here is redundant, and thus potentially confusing.
The text then says: “Opportunistic security protocols may at times refuse to operate with peers for which higher security is expected, but for some reason not achieved.”

There is a subtle issue at play here. It suggests that a protocol that is deemed an example of OS might require security services beyond confidentiality, e.g., authentication. This seems to contradict the notion that an OS protocol requires no coordination between administrators to enable communication, as stated in the first goal. Also, the phrase “at times” is misleading, unless it is intended to suggest that inconsistent behavior between the same set of peers, perhaps based on time of day, is OK.
The description of this goal ends with the following text: “The conditions under which connections fail should generally be limited to operational errors at one or the other peer or an active attack, so that well-maintained systems rarely encounter problems in normal use of opportunistic security.”

This sentence immediately follows the discussion of refusing to communicate when peers have differing requirements for security services. It is confusing, as it appears to ignore the scenario described in that sentence.
The next goal, encryption by default, begins with a strong statement that is muddled by the choice of words: “An opportunistic security protocol MUST interoperably achieve at least unauthenticated encryption between peer systems that don't explicitly disable this capability.”

Perhaps the author meant to say: “If two peer systems make use of the same OS protocol, they MUST, at a minimum, establish unauthenticated, encrypted communication when they connect, unless either has explicitly disabled this capability.” Still, this simpler statement would conflict with the prior goal of allowing a system to reject unauthenticated communication with a peer, e.g., because it requires the use of additional security services. This needs to be reconciled with prior goals.
The text then says: “Over time, as peer software is updated to support opportunistic security, only legacy systems or a minority of systems where encryption is disabled should be communicating in cleartext.”

This seems to ignore the possibility that an active attack might prevent peers from enabling OS, even though they are both capable.
The description of this goal ends with a mention of PFS. This is out of place in this definition. It should appear elsewhere.
The next goal starts by using the undefined phrase “strong security”. Several others have suggested that this phrase be removed from the I-D, and I concur. If the author wants to use the phrase, it must be defined in Section 2.
The penultimate paragraph in this section finally offers a definition for OS. This needs to be moved near the beginning of this document. The word “actual” should be elided from the second sentence of the definition. The definition ends with: “… while allowing fallback to cleartext with peers that do not appear to be encryption capable.” Change “encryption capable” to “OS-capable” or “OS-enabled”.
The final paragraph in this section says: “When possible, opportunistic security SHOULD provide stronger security on a peer-by-peer basis.”

This is an obvious attempt to make the definition of OS very broad, well beyond the primary goal cited earlier. It also conflicts with the goal of not requiring coordinated security configuration between peers, another goal cited earlier. I suggest the author try to refine the text in a way that does not result in these problems. The last sentence in the section appears to be the author’s pet peeve, and probably does not belong in this document. The use of “MUST” here is also questionable, as it is stated without explicit context. (The author did not say, for example, that this behavior in an OS-capable MTA violates the criteria established for OS.)

Section 4
Again there is use of the phrase “strong security”, which is problematic for the reasons cited above.

Section 6
The references are not described as informative vs. normative.