Sent: Wednesday, December 14, 2022 3:30 AM
To: ietf@xxxxxxxx <ietf@xxxxxxxx>
Subject: Re: [Rfc6761bis] New Non-WG Mailing List: rfc6761bis
On 12/13/22 15:18, Joe Abley wrote:
Hi Warren,
On Tue, Dec 13, 2022 at 17:03, Warren Kumari <warren@xxxxxxxxxx> wrote:
I think that the root issue is that there is not a simple way to determine if a name is included in "that subset of the domain namespace that is used by IETF protocols like the DNS".

Perhaps part of the problem is phrasing that particular question as though it has a simple answer. The nature and composition of the namespace depends on the vantage point.
I think the IETF has done fine for years understanding what the DNS/zeroconf/dnssd/whatever namespaces look like without needing a strict definition, with due deference to Robert Pirsig and apologies for the oblique suggestion of quality.
People have been intentionally causing name collisions with hosts.txt overrides, nsswitch.conf, locally-served zones, etc, etc for decades and it has not caused any great indigestion.
I might disagree here. I've seen too many cases in which applications were broken by these mechanisms. To the extent that these mechanisms work, they rely on users and/or IT people magically understanding which names will cause problems if overridden and which ones won't. The more diverse the name space, the less likely they are to be right. And the diversity of the name space can only be expected to increase over time.
And yet, I'll admit that the ability to override a name lookup is often useful in corner cases, and sometimes essential: for example, blocking lookups of DNS names known to be used by malware.
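(For concreteness, this is roughly what such an override looks like in practice. These are illustrative snippets of my own, not anything drawn from 6761bis; the blocked name is made up, and a real deployment would use a maintained blocklist.)

    # hosts-file style override: force a known-bad name to a dead address
    0.0.0.0    badware.example

    # Unbound locally-served zone: answer NXDOMAIN for the whole subtree
    server:
        local-zone: "badware.example." always_nxdomain

Either mechanism works only because the operator knows which names are safe to override, which is exactly the knowledge that gets harder to come by as the namespace diversifies.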
Unintentional name collisions occur every millisecond. Domains exist in some networks that do not exist in any others. Domains are missing from the perspective of some clients that are very much alive and well to the majority. Zones do not propagate instantly. People mess things up.
Every single one of these things represents an opportunity to further bifurcate the domain namespace. The resulting uncertainty has caused no end of angst at ICANN when they have decisions to make about new TLDs, but does it make the understanding of what the DNS namespace is in the IETF any more difficult in practical terms?

I think we need to step back and ask what it is that we are trying to achieve.
I like to say that part of IETF's job is to "make the Internet safe for applications". What I mean by that is that applications need to have a fairly predictable environment to operate in, no matter where they're operating from. If applications can't
trust the network and network infrastructure like DNS to do their jobs, the applications have to make unreliable guesses, and the result is a mess.
That predictable environment relies significantly on protocol standards. (It also relies significantly on operational standards, which IETF does less well, and is less well placed to encourage or maintain).
I would propose this as a criterion with which to evaluate both protocol standards and operational practices, whether or not related to DNS.

Without answering that question, I think it's at least useful for us to specify what it takes to make applications operate predictably. Whether we can require those rules to be followed is a different question.

Are we in the business of asserting control over the whole domain namespace? Do we want to be the self-appointed, enforcement-free gatekeepers who decide who can use what for what?
How is the special names registry actually used? We have asserted that it is useful. Was it instantiated mainly to justify a desire for the use of LOCAL or does it really have a higher purpose?
Is it helpful for things to be included in the special names registry because the way they were assigned or registered is special, even though there is nothing special in how they are used?
Consistent with the reasoning above, I would claim that the special names registry is potentially useful if applications can make use of the information in that registry to improve their reliability/predictability, or to be better at detecting problems that arise from the use of those names.
For example, applications should not expect "something.local" to mean the same thing everywhere.
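(A sketch of what "making use of the registry" could look like inside an application, assuming it ships with, or fetches, the registry's contents. The list below is a partial, hand-typed subset of the IANA Special-Use Domain Names registry, and the function is mine, not from any standard library.)

    # Partial, illustrative subset of the Special-Use Domain Names registry.
    SPECIAL_USE = {
        "localhost",     # RFC 6761: resolve locally, never send to the DNS
        "invalid",       # RFC 6761: always treat as nonexistent
        "test",          # RFC 6761: reserved for testing
        "example", "example.com", "example.net", "example.org",
        "local",         # RFC 6762: link-local meaning, resolved via mDNS
        "onion",         # RFC 7686: Tor; not resolvable in the global DNS
        "home.arpa",     # RFC 8375: homenet, locally served
    }

    def special_use_suffix(name):
        """Return the special-use suffix that covers `name`, or None."""
        labels = name.rstrip(".").lower().split(".")
        for i in range(len(labels)):
            suffix = ".".join(labels[i:])
            if suffix in SPECIAL_USE:
                return suffix
        return None

    for n in ("printer.local", "www.example.com", "internal.home.arpa", "www.ietf.org"):
        print(n, "->", special_use_suffix(n))

An application that consults something like this can decide, before it even issues a query, that an answer for printer.local is only meaningful on the local link, rather than discovering that the hard way.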
Why is LOCAL better than LOCAL.ARPA?
It's easier to type, and presumably more likely to be used. Also, for better or worse, the .local convention was widely deployed before there was any attempt to standardize it. Assuming for the purpose of argument that there was a valid need for something
like .local, it was probably easier to use .local than to try to get the vendor and user communities to migrate to some other (and probably uglier) convention.
Incidentally, .local is a good example of how name collisions can cause problems. My home router has a DNS resolver that it advertises via DHCP. That resolver tries to answer lookups for .local based on who knows what voodoo, sets .local as a default suffix
for DNS lookups, and responds to PTR queries with answers ending in .local. This conflicts with my computers' use of .local to mean "something to be looked up using mDNS". Some of my computers disable mDNS lookups when they see the DHCP server advertising it; others don't, which basically means ".local" is worse than useless on my home network: it's both unreliable and inconsistent.
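(To make the conflict concrete: RFC 6762 expects a host to treat names under .local specially at lookup time, roughly as in the little sketch below. This is illustrative logic of my own, not any particular stub resolver's implementation. A router that also answers .local over unicast DNS, and hands out .local as a search suffix, guarantees that the two paths can disagree about the same name.)

    # Illustrative only: the per-name dispatch RFC 6762 expects from a host.
    def lookup_transport(name):
        labels = name.rstrip(".").lower().split(".")
        if labels and labels[-1] == "local":
            # Link-local meaning: multicast the query (mDNS, UDP port 5353)
            # instead of sending it to the configured unicast resolver.
            return "mDNS"
        return "unicast DNS"

    print(lookup_transport("printer.local"))    # mDNS
    print(lookup_transport("www.example.com"))  # unicast DNS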
How was it that homenet specified use of the HOME domain apparently without enough scrutiny, and how do we prevent that from happening again?
We could start by deprecating .home and discouraging its use, just as we deprecated IPv6 site-locals. (I'm not stating a firm position that we should deprecate .home, though offhand I think it was a Bad Idea. Rather, I'm saying that we do have some precedent
for deprecating bad ideas that were once standards.)
Keith