Re: Proposed DNSSEC Plenary Experiment for IETF 74

On Fri, Nov 28, 2008 at 08:58:19AM -0800, Bill Manning wrote:

> 	a linked suite of signed zones with the DNSKEY/DS records
> 	imbedded in the parents zones, all the way to the root zone,
> 	and or a look aside system where these records are kept
> 	constitutes DNSSEC deployment.
> 
> 	end point visability or use of this chain of custody is 
> 	immaterial to DNSSEC deployment.
> 
> 	Is that really what you are trying to say?

Maybe sort of.  My point is that I don't think it's obviously bad if
we have no DNSSEC-aware applications, or if end points on the network
start seeing lookup failures because DNSSEC validation gets a bogus
result and the resolver returns SERVFAIL to the end node.  In one way
of looking at things, people not being able to reach a site _at all_
because it looks like there is a MITM attack going on is a step
forward.  It will, however, be at least frustrating.

Immaterial, however, might be going too far, for at least these reasons:

1. DNSSEC-enabled operation is somewhat more fragile than the
operation to which we have become accustomed.  (Note that other tricks
travelling under the guise of additional forgery resilience, ones that
change the way caches are populated, are also likely to increase that
fragility.)  So we'll see more failures than are actually warranted by
attacks.  That makes me unhappy.

2. Failures of the sort in (1) may mean that people will decide DNSSEC
is too risky, and turn it off.

3.  Overloading SERVFAIL as a response means debugging will be
painful.  It's also ugly, because it takes a response code that used
to mean one thing, and makes it mean two (see other rants about
overloading data types for why I hate this).
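The ambiguity in (3) can be made concrete with a toy sketch.  Here
`resolve` is a hypothetical stand-in for a stub's query function, not
a real resolver API; retrying with the CD (Checking Disabled) bit set,
which asks the validating resolver to skip DNSSEC validation, is the
usual way to guess which of the two meanings a given SERVFAIL carries:

```python
# Toy sketch of disambiguating an overloaded SERVFAIL.  `resolve` is a
# hypothetical callable (name, cd=bool) -> rcode string, standing in
# for a real stub resolver; it is not an actual library API.

def classify_servfail(resolve, name):
    """Guess why a query failed by retrying with Checking Disabled."""
    if resolve(name, cd=False) != "SERVFAIL":
        return "ok"
    # CD=1 asks the upstream resolver to skip DNSSEC validation, so a
    # failure that persists points at the server, not at validation.
    if resolve(name, cd=True) == "SERVFAIL":
        return "server failure"        # the old meaning of SERVFAIL
    return "validation failure"        # the new, overloaded meaning
```

In practice a debugger does exactly this by hand: `dig example.com`
returns SERVFAIL, `dig +cd example.com` succeeds, and one concludes
the chain of trust is bogus rather than the server being down.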

The fact is, however, that we're attempting to graft some data
assurances onto a distributed, loosely-coherent database designed with
extremely naive assumptions about data validity from the network.  If
we want that feature, and we don't want to have to wait until the
entire Internet upgrades its infrastructure, we will have to live with
some very unhappy compromises, while noting that if you replace all
your infrastructure, you get a greater benefit.  That seems to me to
be the only realistic deployment strategy.  And I sure think it's time
we deploy our own stuff: that I couldn't get proper responses from
`dig +dnssec` in Minneapolis is, I think, a serious failure of the
IETF to eat its own dog food.

> > My personal reading of the current specifications is that, if you have
> > at least one path to validation, then validation is supposed to work.
> > So search rules ought not to be needed.  What the implementations
> > actually do is currently at variance with my interpretation, however.
> 
> 	I think the problem occurs when you have -two- paths to
> 	validation and the answers conflict.

Right, but my _personal_ reading of the RFCs is that they are
perfectly clear on how this is supposed to work.  As it happens, they
don't have a feature that many people seem to want.  Pity that feature
request didn't come in sooner, but I guess we'll have to come up with
something to accommodate it.

A

-- 
Andrew Sullivan
ajs@xxxxxxxxxxxx
Shinkuro, Inc.