Jari Arkko wrote:
> David Hopwood wrote:
>> At the MASS/DKIM BOF we are being required to produce such a thing as a
>> prerequisite to even getting chartered as a working group.
>>
>> A more pertinent request at that stage might be, "Please clarify the
>> security requirements for this protocol." IOW, what is the protocol
>> supposed to enforce or protect, under the assumption that it will be
>> used in the Internet environment with the "fairly well understood
>> threat model" described above?
> Hmm. It may be that it's well understood what the threat model in the
> Internet is. (But if so, why are we having so many problems?)
Several reasons (which are not independent):
- most of the protocols that we *deploy* are not secure in that model.
We need to pay more attention to aspects of protocols that act as
obstacles to deployment, and in particular reduce the costs (monetary,
support, reliability, usability, performance, etc.) of using more
secure protocols.
- although the assumption of intrusion-resistant end-systems is necessary
for security, the operating systems running on most machines (particularly,
but not exclusively, Microsoft Windows) do not adequately support it.
It's like building on sand.
- most security problems are treated as just implementation bugs to be
patched. This does not address the fundamental design flaws that lead
to these problems being so common and having such serious effects,
including in particular:
* use of unsafe programming languages (that is, languages in which
common errors cause undefined behaviour)
* the property of conventional operating systems that programs
run by a user almost always act with the full authority of the user.
- even where implementations of systems correctly support secure protocols,
they are often configured to be insecure by default; insufficient attention
is paid to reducing the effort needed to produce a secure configuration,
to make this effort incremental as users start to make use of functions
that require configuration, and to reduce potential sources of error.
- user interfaces do not give the necessary information for users to make
informed security decisions, or else give too much information and do
not make it clear what is important. There is hardly any HCI testing of
security interfaces.
- there is an unhelpful perception that security and usability are necessarily
in opposition, which leads to system designers being satisfied with designs
that are not good enough from the point of view of being simultaneously
secure and usable. The paper "User Interaction Design for Secure Systems"
at <http://www.sims.berkeley.edu/~ping/sid/> is essential reading. Here's
an important point from its introduction:
    Among the most spectacular of recent security problems are e-mail
    attachment viruses. Many of these are good real-life examples of
    security violations in the absence of software errors: at no point
    in their propagation does any application or system software behave
    differently than its programmers would expect. The e-mail client
    correctly displays the message and correctly decodes the attachment;
    the system correctly executes the virus program when the user opens
    the attachment. Rather, the problem exists because the functionally
    correct behaviour is inconsistent with what the user would want.
--
David Hopwood <david.nospam.hopwood@xxxxxxxxxxxxxxxx>
_______________________________________________
Ietf@xxxxxxxx
https://www1.ietf.org/mailman/listinfo/ietf