On 2/28/20 1:58 PM, Phillip Hallam-Baker wrote:
> On 2/28/20 1:12 PM, Tom Herbert wrote:
>> Yes, but in BSD sockets, the most common networking API, "bind" takes
>> an address and port number argument, "connect" takes an address and
>> port number argument, and getsockname and getpeername return the
>> respective pairs set on a socket. So the TCP 4-tuple is very visible
>> to applications and has been for many years. If there's a better way
>> to do this that hides this and makes it easier, I say go for it, but
>> please don't call this a solved problem until you've achieved
>> ubiquitous deployment and we can obsolete the sockets API because no
>> one is using it anymore.
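Tom's point is easy to demonstrate. A minimal sketch of that 4-tuple visibility (the loopback address and OS-chosen ephemeral ports below are just for illustration):

```python
import socket

# bind() takes an explicit (address, port) pair.
# Port 0 asks the OS to pick a free ephemeral port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
addr = server.getsockname()  # the (address, port) the OS assigned

# connect() likewise takes an (address, port) pair; no name
# resolution is involved anywhere in this exchange.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(addr)
conn, _ = server.accept()

# getsockname/getpeername expose the full TCP 4-tuple.
local_addr, local_port = client.getsockname()
peer_addr, peer_port = client.getpeername()
print((local_addr, local_port), "->", (peer_addr, peer_port))

conn.close()
client.close()
server.close()
```

Nothing above required a hostname, a resolver, or any infrastructure beyond the two endpoints themselves.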
And in particular, any API that presumes that DNS will be reliable and
have the correct address for a peer, or even that it exists at all, is
going to suffer from a huge disconnect from reality.

The beautiful thing about only needing an address and a port (and
often having a default port) is that it doesn't need any higher-layer
infrastructure to make it work. This is a feature, not a bug.
> OK we are officially done.
>
> DNS is correct by definition. The resolution of example.com is not a
> matter for discussion; it is what the DNS resolution service you
> decide to trust returns.
Sorry, no. DNS can be, and often is, out of sync with reality. And
when DNS fails, as it often does, the IP address still works. That's
why millions of local networks use static IP address assignment: it's
easier to just use IP addresses than to make sure that DNS always
works, for example when the DNS servers are located elsewhere and the
local network gets disconnected.
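That distinction is visible even at the API level: an IP literal resolves locally, with no resolver involved at all. A quick check makes this explicit (192.0.2.7 is a documentation-range address, used here purely for illustration):

```python
import socket

# AI_NUMERICHOST forbids any DNS lookup. An IP literal still
# resolves, because no lookup is needed.
info = socket.getaddrinfo("192.0.2.7", 443, type=socket.SOCK_STREAM,
                          flags=socket.AI_NUMERICHOST)
print(info[0][4])

# A hostname under the same flag fails immediately: it cannot be
# turned into an address without consulting DNS.
try:
    socket.getaddrinfo("example.com", 443, type=socket.SOCK_STREAM,
                       flags=socket.AI_NUMERICHOST)
    name_needs_dns = False
except socket.gaierror:
    name_needs_dns = True
print(name_needs_dns)
```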
You can claim that such sites should have multiple network links
and provision their DNS better. But you don't get to impose your
requirements on others' operations.
> I have given the application layer view of the transport layer and
> below. I really don't care if you think we are doing it wrong; that
> is how we are going to continue to do it.
I'm glad you don't get to dictate how others write applications
or operate their networks.
> There are of course concerns that arise from attempting to use
> manually configured DNS as the basis for discovery. Manual
> configuration is unreliable, which is why I intend to automate the
> process at some point.
> There should be a single point of management for service hosts in a
> domain.
Uh, no. Single points of failure make everything unreliable.
ESPECIALLY management.
> Hosts should register/deregister the services they provide as they
> come up and down.
Uh, no. There are lots of issues with that, including not only
degraded reliability but also privacy.
> This should also coordinate with the provision of certificates. And
> the DNS config should be calculated from that, so all the
> DANE/SPF/TXT/SRV records should be generated automatically.
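Whatever one thinks of the reliability argument, the generation step itself is mechanical. A toy sketch of deriving SRV records from a service registry, in which the registry schema and every name are hypothetical:

```python
# Hypothetical service registry; hosts would register/deregister
# entries here instead of anyone editing zone files by hand.
services = [
    {"name": "imap", "proto": "tcp", "host": "mail.example.com", "port": 993},
    {"name": "sip",  "proto": "udp", "host": "voip.example.com", "port": 5060},
]

def srv_records(domain, services):
    """Emit one SRV record line per service (priority 0, weight 0)."""
    return [
        f"_{s['name']}._{s['proto']}.{domain}. 3600 IN SRV 0 0 "
        f"{s['port']} {s['host']}."
        for s in services
    ]

for line in srv_records("example.com", services):
    print(line)
```

The same registry could drive TLSA/TXT generation and certificate issuance; the point of contention in the thread is not whether this is implementable, but whether depending on it is wise.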
> But if you configure manually and people can't reach your service,
> that is because your DNS is configured to say your service is down.
> That is not a protocol failure.
No, if you design an application so that it needs unreliable
services in order to operate reliably, it's a design failure.
Keith