On 2/28/20 7:29 PM, Phillip Hallam-Baker wrote:
The failures are at least as likely to be human errors as any other kind, but either way, such failures are common. And people _are_ using IP addresses, quite frequently. Not so much for the web or email, but static allocation and configurations based on IP address are quite common on local networks, within enterprise networks, and even sometimes between enterprise networks (see the sketch below for the sort of thing I mean). From a certain perspective, DNS is seen as relatively unnecessary, and as just one more thing that can fail. And when, say, a factory production line stops because of a DNS lookup failure, someone's head is going to roll if it happens twice.

Also, there are organizational issues at play here. DNS is often seen as the purview of "IT" (I hate that term, but it seems to have stuck). "IT" often has a different view of its responsibilities than, say, manufacturing or engineering, and "IT" is widely seen by other technically competent people as an impediment to getting work done or shipping product. From that perspective, DNS is better avoided.

(I remember a recent case in which someone in marketing at a corporation (which I shall not name publicly) directed a low-level IT person to change the NS records associated with a subsidiary company's domain, because the company had hired a new web designer, and the web designer decided that the easiest way to make the transition was to set up a new DNS server for the company at a different provider... and in doing so broke every other application used by the subsidiary company, including every application used for order processing and customer support. This is the kind of idiocy one finds in the "real world", and unfortunately, DNS contributes to the problem.)
Your view of "the vast majority of applications" may not be
accurate, because as far as I can tell "the vast majority of
applications" aren't specified in readily available documents. I
know of a lot of private applications running over raw tcp, for
example. But even applications that use HTTP as a substrate
quite often aren't hitting HTTP servers on the public Internet
that need such provisioning. There are lots of small Internet
appliances that have HTTP interfaces that can be used for
monitoring and/or configuration. It's a really large and diverse Internet, and there are lots of
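A lot of the monitoring glue I've seen looks roughly like this sketch (the address and path here are invented): it polls a device's HTTP interface directly by private address, and nothing about it is, or needs to be, provisioned in public DNS.

    from urllib.request import urlopen

    DEVICE_URL = "http://10.0.0.17/status"   # hypothetical appliance on a private LAN

    def poll_device() -> str:
        # The target is reached by address; no public DNS provisioning is involved.
        with urlopen(DEVICE_URL, timeout=2.0) as resp:
            return resp.read().decode("utf-8", errors="replace")

    if __name__ == "__main__":
        print(poll_device())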
It's a really large and diverse Internet, and there are lots of use cases that are important to people but don't look like the statistically typical case. IMO the IETF needs to be aware of this and not presume that everything that matters is running in the cloud, supported by redundant high-speed links and well-provisioned DNS. Many environments don't have DNS service for internal hosts, only for publicly visible hosts.

Keith