RE: There should be a design pattern, not patterns.

For what it's worth, WebFinger [RFC 7033] already provides a simple, practical discovery solution that is deployable today and doesn't require DNS gymnastics.
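
To make that concrete, here is a minimal sketch of a WebFinger lookup in Python. The account alice@example.com and the happy-path error handling are illustrative assumptions on my part; the /.well-known/webfinger endpoint and the JRD response are what the RFC defines.

    # Minimal WebFinger (RFC 7033) lookup sketch. The account name is a
    # made-up example; the endpoint and JRD fields come from the RFC.
    import json
    import urllib.parse
    import urllib.request

    def webfinger(host, resource):
        query = urllib.parse.urlencode({"resource": resource})
        url = "https://%s/.well-known/webfinger?%s" % (host, query)
        with urllib.request.urlopen(url) as reply:
            return json.load(reply)        # JRD: "subject", "links", ...

    # Discover the services advertised for a (hypothetical) account.
    jrd = webfinger("example.com", "acct:alice@example.com")
    for link in jrd.get("links", []):
        print(link.get("rel"), link.get("href"))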

-----Original Message-----
From: ietf [mailto:ietf-bounces@xxxxxxxx] On Behalf Of Phillip Hallam-Baker
Sent: Wednesday, August 20, 2014 8:59 AM
To: IETF Discussion Mailing List
Subject: There should be a design pattern, not patterns.

The biggest weakness in the way that the IETF approaches problems is that it is overly reductionist. Problems are separated out into 'DNS' and 'PKI' parts, and privacy is separated from integrity. That would be fine if the problem we faced were that we can't do privacy or integrity or whatever on their own, but we can do those.

The problem we face with Internet protocols today is that security falls through the cracks in the architecture. And that isn't a surprise because those cracks are precisely the point where the attacker is going to place a jackhammer and pound.



If Internet architecture means anything, then there should be one canonical approach by which a client discovers how to connect to a service. This comprises discovery of the following (a rough sketch of a combined result appears after the list):

* The IP Address
* The Port
* The Transport protocol
* The Transport layer security parameters
* The Application layer protocol
* The Application layer security parameters
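
As a rough sketch, a combined result covering all six items might look something like the following; the field names and types are purely illustrative, not a proposal.

    # Sketch only: one possible shape for a combined discovery result.
    # Field names are illustrative assumptions, not a wire-format proposal.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class ServiceDiscovery:
        addresses: List[str]                 # from A / AAAA records
        port: int                            # e.g. from an SRV record
        transport: str                       # "tcp", "udp", ...
        tls_params: Dict[str, str]           # e.g. TLSA constraints, minimum version
        app_protocol: str                    # e.g. "http/1.1", "h2"
        app_security: Dict[str, str] = field(default_factory=dict)  # policy knobs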

Obviously DNS should play a major role in this discovery scheme but the mere fact that a claim is made authoritatively in the DNS does not necessarily mean that the client should accept that assertion or connect.

There should, however, be one canonical discovery approach, and it should not be a protocol designer's choice. The decision to use RFC 822 style headers, XML or JSON is a legitimate design choice. How to do discovery is not. There should be one way to do it.


This is why I was critical of DANE, which was sold as a key discovery protocol but became a security policy mechanism. We do need a security policy mechanism, but DANE is not that mechanism, because DANE begins with the assumption that you are going to use TLS, and whether to use TLS should be something the security policy tells you.

DNS should be a one stop shop telling the client everything it needs to connect. And one requirement for that is the ability to authenticate the authoritative statements of the DNS name owner and this in turn should be the driver for DNSSEC.

As far as a typical domain name owner is concerned, the total cost of ownership of DNSSEC is north of $3000 today. That's the cost of either training staff to do the deployment or finding someone capable of doing it for them. Expertise is expensive, and most people who work in the IETF have no idea how large the gap is between their skills and those of the typical Internet network admin. Walk through the 'sales' room of any CA and you won't hear much selling going on. The talk is all 'and now open up your apache.conf file...' So positioning DNSSEC as enabling a free replacement for the cheapest WebPKI certificates ($50 or less) is beyond clueless. And it is not just that the sales proposition is a bad one.

Much more importantly, it is actually selling DNSSEC short.

DNSSEC should be the enabler for the next generation of Internet security, the enabler for comprehensive security policy. If that is on the table then it is a simple matter to roll out DNSSEC as an upsell to every DV certificate holder. We have the customer support people who can do the necessary hand holding to get someone's DNSSEC deployed and security policy set correctly. And if we can do that in bulk we can do it for cheap.


Unfortunately there are some problems with the basic DNS protocol that make it less than ideal for that purpose. These are:

1) One request and response per round trip. Although the DNS protocol allows for multiple queries per request, the response has only one slot for the status result, which makes multiple requests per packet almost unusable (see the header sketch after this list).

2) Requests are not authenticated in any fashion, not even the minimal TCP authentication of returning an acknowledgement. And so the protocol is open to abuses such as amplification attacks that are exacerbated by DNSSEC.

3) Cruddy middlebox implementations that enforce obsolete constraints, sabotaging attempts to move past them.

4) Lack of confidentiality protections.
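
To illustrate the first problem, here is a sketch of the fixed DNS header from RFC 1035: QDCOUNT lets a request carry several questions, but the flags word holds exactly one 4-bit RCODE for the whole response. The packing code below is only an illustration, not a DNS implementation.

    # The fixed 12-byte DNS header (RFC 1035, section 4.1.1).
    # QDCOUNT may be greater than 1, but there is a single 4-bit RCODE,
    # so per-question status cannot be reported.
    import struct

    def dns_response_header(txid, rcode, qdcount, ancount):
        flags = 0x8000 | (rcode & 0x000F)           # QR=1, one RCODE for everything
        return struct.pack("!HHHHHH", txid, flags,
                           qdcount, ancount, 0, 0)  # NSCOUNT, ARCOUNT left at zero

    hdr = dns_response_header(0x1234, rcode=3, qdcount=2, ancount=0)
    # NXDOMAIN... but for which of the two questions?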


Now we get to the part that I find incomprehensible. We all agree that the DNS client-resolver protocol is a major obstacle to DNSSEC deployment. So why on earth are we discussing proposals that ONLY add encryption to address the privacy issue when we can solve ALL FOUR problems for the same cost?

Back in the Louis Freeh days we made a big mistake in insisting that all security be end-to-end or nothing at all. Guess what: nothing at all won. We were so obsessed with stopping Louis Freeh that we were blind to the fact that we had designed a system most Internet users could not, and therefore would not, use. We would be making a similar mistake if we made the DNS client-resolver protocol secure against disclosure attacks but failed to address the other limitations that make it unsuited as a general-purpose discovery protocol.


So what should we do instead?

First we need to fix the DNS protocol with a two-stage protocol that provides:

1) A lightweight key exchange with mandatory server authentication and optional client authentication capabilities. [We can build this out of TLS.] The result of this exchange is an opaque identifier (i.e. a ticket) and a shared secret.

2) A stateless UDP query/response mechanism that provides authentication and confidentiality protections, in which requests are a single packet but responses can be multiple packets. While fitting enough information into 500 or 1280 bytes is very hard, two packets are plenty for most cases, and even the most complicated discovery cases rarely require more than four (a client-side sketch follows).
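
Here is a very rough client-side sketch of what stage two might look like. The packet layout, the choice of AES-GCM, and the field sizes are all assumptions for illustration, not a worked protocol.

    # Stage-two sketch: a single-packet, authenticated, confidential query
    # keyed by the (ticket, shared secret) pair from the stage-one exchange.
    # Packet layout and AES-GCM choice are illustrative assumptions.
    import os
    import socket
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def send_query(resolver_addr, ticket, shared_key, query_blob):
        nonce = os.urandom(12)
        sealed = AESGCM(shared_key).encrypt(nonce, query_blob, ticket)  # ticket as AAD
        packet = ticket + nonce + sealed     # resolver finds the key via the ticket
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(packet, resolver_addr)
        return sock                          # responses (possibly several packets) arrive here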


Reforming the DNS protocol in this fashion is actually a lot simpler than the competing proposals but it solves all the problems with the DNS protocol, not just the cause du jour.

The DNS client is going to connect to a DNS resolution service and establish a service connection that potentially persists for a very long time. If the device is an encumbered one it might be years.


It is now possible to make a complicated DNS discovery request for the same latency cost as a traditional A record lookup:

Traditional query:
   example.com ? A

Complex discovery
   example.com ? A
   example.com ? AAAA
   _http._tcp.example.com ? SRV
   _http._tcp.example.com ? POLICY
   _80.example.com ? TLSA

Even though we are making five queries instead of one, they easily fit into one UDP request packet, and the resolver can return all the answers and the full DNSSEC chains in four or five packets without trouble.
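
Here is a sketch of how those five questions might be batched into one request body under such a protocol. The length-prefixed encoding is made up for illustration, and POLICY is the hypothetical record type mentioned above.

    # Sketch: pack several (name, rrtype) questions into one request blob.
    # The encoding is illustrative only, not a proposed wire format.
    import struct

    QUESTIONS = [
        ("example.com",            "A"),
        ("example.com",            "AAAA"),
        ("_http._tcp.example.com", "SRV"),
        ("_http._tcp.example.com", "POLICY"),   # hypothetical record type
        ("_80.example.com",        "TLSA"),
    ]

    def pack_questions(questions):
        blob = struct.pack("!B", len(questions))
        for name, rrtype in questions:
            for part in (name, rrtype):
                data = part.encode("ascii")
                blob += struct.pack("!B", len(data)) + data
        return blob

    body = pack_questions(QUESTIONS)     # comfortably fits in one UDP packet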


In about 3% of cases it is not possible to make UDP queries. This is pretty much the same as the proportion of cases in which port 53 DNS is sabotaged. So a fallback to an HTTP web service is required to provide a connection guarantee. But while it is essential for a discovery process to work in 100% of network situations, we do not require latency to be optimized in every circumstance.
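
A sketch of that fallback logic follows; the web service URL, the idea of POSTing the same request blob over HTTPS, and the timeout are assumptions for illustration.

    # Fallback sketch: try the fast UDP path, fall back to an HTTPS web
    # service carrying the same request blob. URL, port and timeout are
    # placeholders.
    import socket
    import urllib.request

    def resolve(body, resolver_ip, resolver_port,
                fallback_url="https://resolver.example/.well-known/discover"):
        try:
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.settimeout(2.0)
            sock.sendto(body, (resolver_ip, resolver_port))
            return sock.recv(65535)          # fast path: one round trip
        except OSError:
            req = urllib.request.Request(
                fallback_url, data=body,
                headers={"Content-Type": "application/octet-stream"})
            with urllib.request.urlopen(req) as reply:
                return reply.read()          # slower, but works everywhere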

Now this is a protocol design pattern. It is completely general. It leverages legacy SRV and TLSA records where they are present but we can make use of new record types such as the POLICY record where the design does not begin with the requirement to be backwards compatible.


The DNS resolver is always a trusted function. If you don't trust the resolver, go to the authoritative servers directly. Embracing the fact that the resolver is trusted means that we can leverage it to provide additional services. We could even use it as a Kerberos ticket broker, which isn't something most folk would go for today, but that approach will suddenly become inevitable should someone ever come close to building a quantum computer.





