These comments were sent to the IAB already. I was encouraged to
send them to the general IETF list. This is mostly a re-posting of
the comments, with one added paragraph (there's a marker there).
The referenced document is:
http://www.ietf.org/internet-drafts/draft-iab-anycast-arch-implications-01.txt
It's hard to make comments on a document whose mission is not at all
clear. The problem I have is that the document has a faulty baseline
and incorrectly assesses extensions and variations. Having spent 15
years with the DNS, and having had to come to a deep architectural
understanding of it in order to define DNSSEC, my view of the DNS is
vastly different from the one documented by the IAB. Given that
mismatch it is hard to tell what the document is trying to guard
against, or push towards.
Any account of what the DNS is and why it exists has to recognize
that a lot of work we think is native to it actually preceded it, in
the /etc/hosts.txt file and in similarly built systems. The DNS is
not principally there to translate names to numbers, as the draft
opens with, although that is a high-profile use case. The DNS did
not define a uniform naming scheme. The DNS is there principally to
improve on the previous solution (a text file distributed once a week
from a central location).
If I were asked to list the bullet points that describe the DNS's
core competencies, they would be:
- availability
- resilience
- speed in response
- speed in propagating data
- distributed management
- neutrality
Where the DNS is weak is in the data plane and the management plane.
The basic lookup/search algorithm is stilted, inflexible, and hard
for many to understand. The biggest dilemma it faces is that in
order to be strong it uses an unreliable transport substrate as its
primary communication mechanism, the same substrate that is the
source of the protocol's chief technical challenges.
In a way, the DNS is a balancing act: it is all about using UDP
"right."
To a lesser extent, the fact that the DNS is not a client-server
protocol, as it is usually treated in texts, but a client-cache-server
protocol complicates understanding its architecture. This is the
very reason DNSSEC exists: the ingrained middleman in the
client-server exchange. Attempts to empower the middleman are the
chief obstacle to improving the overall health of the DNS and its
cooperation with other protocol systems.
Over time there really hasn't been progress in the way of making the
DNS support applications. Despite what is in the IAB draft, there
has been just one application built into the DNS, done at the "dawn
of time" and not even in a significant way. Mail is the only
application with built-in support in the DNS. No other application
has ever changed the DNS definition. To understand this we should
look at a chronology of changes to the DNS concept and specification.
The original DNS specification is as defined in STD 13's documents.
I won't bore you with repeating what's there, except to point out
that a few seminal types are included in the original set.
It is also important to quantify what's meant by DNS support for an
application. The baseline for any type is: given a name, class, and
type, the returned value is a set of data. Any resource record type
that follows this baseline is considered to have "no special
processing," the label given to such types.
Types that have no special processing include the A, AAAA, and PTR
records. Some of these types do contain domain names that can be
compressed, and this is sometimes mistaken for special processing -
but that is not special to the protocol, just special to the
marshalling of the parameters. Surprisingly to some folks, SRV and
NAPTR also have no special processing; in fact they aren't even
subject to message compression. This point is one I have to make
because the draft uses SRV and NAPTR as evidence of application scope
creep into the DNS.
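To make the baseline concrete, here is a minimal sketch in Python
(assuming the dnspython 2.x library; the queried name is made up).
From the client's point of view an SRV lookup is nothing more than
(QNAME, QCLASS, QTYPE) in, a set of resource records out:

  import dns.resolver

  # A plain lookup: name, class (IN by default), and type go in,
  # an RRset comes out.  Nothing SRV-specific happens on the wire.
  answer = dns.resolver.resolve("_sip._tcp.example.com", "SRV")
  for rr in answer:
      print(rr.priority, rr.weight, rr.port, rr.target)

The same few lines work for A, AAAA, PTR, or NAPTR; only the rdata
fields printed at the end change.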
Types that do have special processing include (not exclusively) SOA,
NS, MX, CNAME, DNAME, and the DNSSEC types. These are the types that
cause considerations for the DNS. MX is the mail application's
dedicated type. Its impact is that it causes additional data
(address records) to be included in a response. A very lightweight
action, but special processing nonetheless.
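To illustrate that lightweight action, the sketch below (Python with
dnspython again; the server address 192.0.2.1 is a placeholder) sends
an ordinary MX query and prints the answer and additional sections;
an authoritative server may place address records for the exchange
hosts in the additional section:

  import dns.message
  import dns.query

  # Build and send a plain MX query over UDP.
  query = dns.message.make_query("example.com", "MX")
  response = dns.query.udp(query, "192.0.2.1", timeout=5)

  print("ANSWER:")
  for rrset in response.answer:
      print(" ", rrset)
  print("ADDITIONAL:")   # where MX's "special processing" shows up
  for rrset in response.additional:
      print(" ", rrset)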
One of the concerns that is conflated with NAPTR, TXT, DNSSEC and
IPv6 records is the concern over protocol data unit size. This issue
is generic to the DNS and stems from the UDP transport "dependency,"
not from application demands. (Back to the balancing act involving
UDP.)
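The knob for that balancing act is EDNS0 (RFC 2671 in the chronology
below). A minimal sketch, again assuming dnspython and a placeholder
server address, advertises a receive buffer larger than the classic
512 bytes so that bigger answers can still come back over UDP:

  import dns.message
  import dns.query

  # Ask for DNSKEY records (typically large) and advertise a
  # 4096-byte receive buffer via an EDNS0 OPT record.
  query = dns.message.make_query("example.com", "DNSKEY",
                                  use_edns=0, payload=4096)
  response = dns.query.udp(query, "192.0.2.1", timeout=5)
  print(len(response.to_wire()), "bytes in the reply")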
Back to the chronology, looking for application-intelligence creep.
These are what I consider to be the turning points in DNS history:
RFC 1995/1996 - event-driven and incremental updates of zone contents
RFC 2136 - dynamic updating of a zone's contents
RFC 2181 - reduction in gullibility of caches
RFC 2308 - negative caching and the introduction of mnemonics
RFC 2671 - extended DNS header/trailer
RFC 2672 - DNAME (non-terminal query redirection)
"Views" - BIND's feature supporting query-source tailored responses
RFC 4033 - DNSSEC (cache poisoning defense)
Anyone who gains an understanding of those RFCs (and the BIND
feature) will have a workable impression of the DNS. Not complete,
and with some gaps, but the architecture would be roughed in.
Internationalized domain names have no impact on the architecture of
the DNS; the same is true of DDDS and the SRV record. IDNs and SRVs
do rely on their own naming conventions, but that isn't an issue for
the DNS. SRV's naming convention precludes the use of wildcards, but
that is not a concern for the DNS protocol's architecture.
The recent push to support tailored responses, that is, responses
based on QNAME, QTYPE, and QCLASS plus information about or inside
the query message, has been happening for nearly a decade. I don't
recall the exact version of BIND that introduced Views, but I do know
I was using it in 2003 when I worked at ARIN. Views was a response
to outside demands unknown to me, and I was aware of stateful
firewalls that did something similar back in the late '90s. This
effort is sustained by latent demand, not a provider's recent desire.
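The mechanism behind Views is simple enough to fit in a few lines.
The toy sketch below (plain Python; all names and addresses are
invented) captures the essence: the answer set is chosen from the
source address of the query, in addition to QNAME, QTYPE, and QCLASS:

  import ipaddress

  INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")

  ANSWERS = {
      "internal": ["10.1.2.3"],      # what employees should see
      "external": ["192.0.2.10"],    # what the rest of the world sees
  }

  def pick_answer(qname, qclass, qtype, source_ip):
      # The "view" is selected by where the query came from.
      view = ("internal"
              if ipaddress.ip_address(source_ip) in INTERNAL_NET
              else "external")
      return ANSWERS[view]

  print(pick_answer("www.example.com", "IN", "A", "10.9.8.7"))
  print(pick_answer("www.example.com", "IN", "A", "203.0.113.5"))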
(Paragraph added for IETF list) In my opinion, merely adding an RR
type to the DNS does not represent an architectural change, which is
why I launched into this history. Applications have added record
types over time. The addition of a type is not a significant event;
in fact, it is an event that has not happened frequently enough. To
me, significant changes to the architecture would include updates to
the algorithm in RFC 1034, section 4.3.2. To date, though, that
section has only been updated for DNAME and fixed for CNAME (in RFC
2672 and 4592 respectively). Not every architectural change will
touch that algorithm, but changes to it signal architectural impact
far more than a newly defined type does.
So far I've summed up the reasons why I disagree with the beginning
of the document. After writing this I went back to the document
again to see if I better understood the point of the draft.
In section 4 there are four guiding principles that center on
enforcing the idea that there is complete coherency across queries
for QNAME, QTYPE, and QCLASS. In my opinion this is archaic.
Although the IETF has ignored incoherency in the DNS (witness the
failure of
http://tools.ietf.org/html/draft-krishnaswamy-dnsop-dnssec-split-view-04
to even become a WG item), in the "real world" this has been going on
since the late '90s.
One reason why such incoherency exists today, having stood the test
of at least a decade in operations (running code), is that
incoherency is not a radical idea, despite the perception. In the
original framing of the DNS, data was static; a dynamic nature was
quickly added. In a static world, coherency is a measurable and
achievable condition; with dynamic elements, it is not. When
examining the traffic crossing port 53, seeing a zone change rapidly
is isomorphic to a server responding differently based on source IP
address. From this observation, combining the concept of tailored
responses with the DNS is fairly easy.
Later in section 4 there are five other cautionary tales. Most of
these are "not a problem" today.
One of these describes a true concern - relying on knowledge of the
DNS management model to assist in authorization determination, that
is, knowing where zone cuts exist. An application that relies on the
management of data in another application has abdicated its
authority and will suffer the consequences at some point. This is
not unique to the DNS.
Mapping real-world objects or concepts to domain names can be as
complicated as anyone wants - for example, naming routers with
multiple interfaces, a problem seen on NOG lists periodically. The
complexity in the mapping is immaterial, so long as the application
developers can do it consistently. Perhaps the most complex mapping
is hashing, as is done for DNSSEC's NSEC3.
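For the curious, here is a rough sketch of that hashing (the NSEC3
owner-name hash from RFC 5155, written from memory in Python; the
salt and iteration count are made up, so treat the output as
illustrative only):

  import base64
  import hashlib

  _B32_STD = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"
  _B32_HEX = "0123456789ABCDEFGHIJKLMNOPQRSTUV"

  def wire_name(name):
      # Lowercased, length-prefixed labels, ending with a zero byte.
      out = b""
      for label in name.rstrip(".").lower().split("."):
          out += bytes([len(label)]) + label.encode("ascii")
      return out + b"\x00"

  def nsec3_hash(name, salt=b"\xab\xcd", iterations=10):
      # H(name | salt), iterate H(digest | salt), base32hex-encode.
      digest = hashlib.sha1(wire_name(name) + salt).digest()
      for _ in range(iterations):
          digest = hashlib.sha1(digest + salt).digest()
      b32 = base64.b32encode(digest).decode("ascii")
      return b32.translate(str.maketrans(_B32_STD, _B32_HEX))

  print(nsec3_hash("example.com"))

The point is not the details; it is that the mapping from object to
owner name can be arbitrarily involved and the DNS neither knows nor
cares.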
Sensitive data can be in the DNS if other protections are taken.
E.g., most companies consider their internal hardware topology to be
sensitive but still run DNS for the benefit of their employees. They
then keep this DNS "private." (Again, this practice is not
documented by the IETF.) Some ENUM work has been done to make ENUM
work in places where telephone number information is considered
sensitive.
The remaining points are outmoded. While it is true that
synchronizing zone contents with external sources of data takes work,
it can be made to work. Maybe the misperception is that this is only
about the DNS. But sometimes putting more work on the DNS relieves
other systems. In this case, the DNS can be operated as it should
be, making use of the dynamic update features the IETF approved and
the zone data propagation approaches already defined, with the result
of assisting in traffic management. Not perfectly, but it helps.
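For instance, an external provisioning system can push a change with
an RFC 2136 update rather than regenerating and reloading zone files.
A minimal sketch, assuming dnspython, a server at the placeholder
address 192.0.2.1 authoritative for example.com, and with
authentication (TSIG) omitted for brevity:

  import dns.query
  import dns.rcode
  import dns.update

  # Replace the A record for www.example.com with a new address.
  update = dns.update.Update("example.com")
  update.replace("www", 300, "A", "192.0.2.50")
  response = dns.query.tcp(update, "192.0.2.1", timeout=10)
  print(dns.rcode.to_text(response.rcode()))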
The recommendations in section 4 are arguable. There are fairly
solid arguments for why the recommendations (except perhaps one of
them) are wrong. I am not saying the recommendations mentioned are
entirely without merit, but they are not worthy of being backed by
the IAB.
On section 3 - trying not to get into a blow-by-blow discussion -
I'll summarize my thoughts: the IAB appears to be saying "avoid these
things because they will be bad" even though some of them have been
in play for more than a decade. It's like a parent telling their
teenager not to smoke after the teen has already developed the habit.
I have to mention 3.3.1. The red herring about the size of data in
DNS has been around for a long time. There is no issue. TCP is not
a problem for DNS, at least architecturally speaking. Yes, some
implementations don't have this right, but the situation can be
fixed. Inside the DNS, size is just a number. All applications have
size issues. All that is needed is to keep in mind what the
strengths of the DNS are, and let applications try to live within the
constraints.
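The fix those implementations need is not complicated. A sketch of
the normal path, assuming dnspython and a placeholder server address:
try UDP first, and when the server sets the TC (truncated) bit, retry
the same question over TCP:

  import dns.flags
  import dns.message
  import dns.query

  query = dns.message.make_query("example.com", "ANY")
  response = dns.query.udp(query, "192.0.2.1", timeout=5)
  if response.flags & dns.flags.TC:
      # The answer didn't fit in UDP; ask again over TCP.
      response = dns.query.tcp(query, "192.0.2.1", timeout=5)
  print(len(response.to_wire()), "bytes,",
        len(response.answer), "RRsets in the answer")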
If the IAB is going to make such a statement, it should remind
applications what the strengths and limitations of the DNS are and
make recommendations on how to solve generic problems. As written,
the document sounds like a sad, almost pathetic plea for folks to
stop innovating with the DNS. It's almost as bad as the IAB
statement on a unique root, which caused more problems than it
solved. (I've never bothered to read the "extend DNS" document
after the first few revisions of it.)
Finally, as the document stands now, I'd rather not be listed as a
contributor. It's not fair to me to be associated with the draft's
architectural view. It is fair to say I've submitted comments, but
don't give the impression I endorse the document as it is.
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Edward Lewis
NeuStar You can leave a voice message at +1-571-434-5468
Me to infant son: "Waah! Waah! Is that all you can say? Waah?"
Son: "Waah!"