Re: Near-Real-Time TLS and DNS Validation using a Multi-Vantage-Point Network of Secure Mirrors

I mistakenly sent a short reply only to Nick - you'd think at some point I'd learn how to use email, but there we are.

I'll fill out my argument here - same argument, just a bit more detail.

On Thu, 15 Aug 2024 at 00:30, Nick Lockheart <lists@xxxxxxxxxxxxxx> wrote:
** INVERTING DANE **

With DANE and DNSSEC, the logic goes: if the DNS record is signed by a
trusted authority, then the browser can trust the DNS records that
identify an unknown Certificate Authority.

But this logic works the other way, too. If the browser already knows
the fingerprint of the CA Certificate used by example.com, but the
browser does not know the IP address, and the browser does not trust
the DNS system, then, if the browser asks the untrusted DNS for a
resolution of example.com, and the browser gets an IP address from the
DNS, and the browser does a TLS handshake with the server at that IP
address, the browser can check the fingerprint of the server's
Certificate and know if the DNS was honest.

Or in other words, cryptographically-secure DNS could verify a
Certificate Authority, but a Certificate Authority could verify
insecure DNS.
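
To make that concrete, the check being described is mechanically something like this (a sketch in Python; the pinned fingerprint is a placeholder, and it pins the leaf certificate rather than the CA for brevity):

    import hashlib
    import socket
    import ssl

    # Placeholder: the fingerprint we already trust, obtained out of band.
    PINNED_SHA256 = "00" * 32

    def dns_was_honest(hostname: str, port: int = 443) -> bool:
        # Step 1: ask the untrusted DNS for an address.
        addr = socket.getaddrinfo(hostname, port,
                                  proto=socket.IPPROTO_TCP)[0][4][0]

        # Step 2: handshake with whatever answered. PKIX validation is
        # off, because the pinned fingerprint is the trust anchor here.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((addr, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                der = tls.getpeercert(binary_form=True)

        # Step 3: the DNS answer counts as honest only if the certificate
        # we saw hashes to the value we already knew.
        return hashlib.sha256(der).hexdigest() == PINNED_SHA256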

Thus, the real problem is: how can we bootstrap trusted information
into the browser?

I don't think this logic *does* work the other way.

The non-DNSSEC case with PKIX currently works because your trust anchor is a list of CA certificates (I'll ignore "fingerprint" here because I think that's incorrect and an implementation detail). The browser (or XMPP server, or whatever) performs some form of endpoint discovery via the untrusted DNS, and the net result is that the identity can be verified irrespective of the trustworthiness of the DNS. That last is significant, because the DNS is still not trustworthy after all this - maybe there were some duff CNAMEs, or some bogus AAAA records inserted, but it doesn't matter because we can prove a chain of trust back to our trust anchor entirely independently of the DNS. This is all tantalisingly close to your suggestion, mind, except that the DNS is still not trusted.

With DNSSEC, things are different - here, we have a trust anchor that comprises a list of CAs *and* a DNSSEC root, so we have multiple paths we might use, and the DNSSEC one might override, or alter, the allowable paths through the PKIX one. DANE, then, doesn't exactly verify a Certificate Authority, either.
 


** A BOTTOM-UP TRUST CHAIN **

We need a chain of trust, but it doesn't need to be top-down, it could
also work bottom-up.

We could build a trust model that works the other way around.


Again, I don't think you really can.

You have an entity you're trying to verify, and an entity that you trust, and you build some kind of chain of trust between them.

When you're doing PKIX, you start with the End Entity Cert and work your way back to a trusted CA (and from that, to the browser or operating system vendor, typically). Usually the End Entity is kind enough to give you a chain that gets you all (or almost all) the way back, so the job can be quite easy - but in principle you can use various discovery methods during path building and go grab the intermediate certificates over HTTP based on the information encoded in the EE cert, and I'm pretty sure that OpenSSL (for example) internally throws all the chain certificates into a bucket and builds its own chain from them.
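
For illustration, that discovery step looks roughly like this - a sketch using the Python 'cryptography' package, assuming the caIssuers URL serves a DER-encoded certificate (as most do), with error handling omitted:

    import urllib.request

    from cryptography import x509
    from cryptography.x509.oid import (AuthorityInformationAccessOID,
                                       ExtensionOID)

    def fetch_intermediates(ee_cert: x509.Certificate) -> list[x509.Certificate]:
        # The AIA extension in the EE cert says where to find the issuer.
        aia = ee_cert.extensions.get_extension_for_oid(
            ExtensionOID.AUTHORITY_INFORMATION_ACCESS
        ).value
        intermediates = []
        for desc in aia:
            # caIssuers entries point at where the issuing cert lives.
            if desc.access_method == AuthorityInformationAccessOID.CA_ISSUERS:
                with urllib.request.urlopen(desc.access_location.value) as resp:
                    intermediates.append(
                        x509.load_der_x509_certificate(resp.read()))
        return intermediates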
 
Consider this example:

I don't want to pick on anyone, but it's easier to explain if we use
absolute names here, so let's use Firefox as an example.

The Mozilla Foundation rolls its own CA. This CA key pair is internal
to Mozilla, and doesn't need to be signed or approved by anyone.

Mozilla uses the Mozilla Foundation's CA to sign all of the server
Certificates for their own services.

This Mozilla CA does not sign anything other than services owned by
Mozilla.

Mozilla places the public CA Certificate of the Mozilla Foundation into
all of their browsers. This is the *only* CA Certificate that comes
with the browser.

Right now, the newly installed browser trusts no one. On first run,
this happens:

The browser makes a DNS request for mozilla.org and gets an IP Address
from local DNS. This DNS response is not trusted.

The browser establishes a connection to the IP address that it just
got, and does a TLS handshake. The Certificate returned from the server
is signed by the Mozilla Foundation's CA. Since the Mozilla
Foundation's CA is already known to the browser, the browser now trusts
that it is talking to the real mozilla.org server.

This creates a secure channel from the browser to the browser vendor,
which I will call BROWSER SECURE CHANNEL.

This BROWSER SECURE CHANNEL allows the browser to bootstrap trust from
the bottom up, starting by trusting itself, then by trusting its
vendor.
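
Mechanically, that first-run check is just ordinary TLS verification against a one-entry trust store - a sketch, with mozilla-ca.pem as a hypothetical bundle containing only the vendor's CA:

    import socket
    import ssl

    # One-entry trust store: only the vendor's CA is trusted.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations("mozilla-ca.pem")  # hypothetical bundle

    with socket.create_connection(("mozilla.org", 443)) as sock:
        # The handshake fails unless the server presents a chain leading
        # back to that single CA - regardless of what the DNS answered.
        with ctx.wrap_socket(sock, server_hostname="mozilla.org") as tls:
            print(tls.getpeercert()["subject"])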


So this appears to be identical in essential form to the status quo, wherein the user trusts the browser vendor (Mozilla, here) to ship the browser with a "good" list of CAs to trust.

The idea that the Mozilla Foundation's CA is not approved by anyone isn't true - it has to be approved (probably implicitly) by the browser's user, and as you note it has to be trusted by the browser for mozilla.org, so you are in effect just hardcoding DANE so far.
 

** BUILDING OUT TRUST **

BROWSER SECURE CHANNEL lets the browser update itself from the browser
vendor, but it also allows the browser to request something called a
SECURE MIRROR BOOTSTRAP file.

This file, requested only once when the browser is first installed,
contains a hash/map of IP addresses and CA Fingerprints.

There would be about 100 IP-CA pairs in this file.

Each pair represents the identity of a SECURE MIRROR.

The bootstrap file is not an exhaustive list of SECURE MIRRORS. It is
only a small subset used for bootstrapping.

Once the browser has the SECURE MIRROR BOOTSTRAP file, it will contact
the SECURE MIRRORS on the list by their IP addresses, confirm that the
CA fingerprint in the TLS handshake matches, then, if the server is
validated by its CA fingerprint, the browser requests a SECURE MIRROR
LIST from that SECURE MIRROR.

The SECURE MIRROR LIST is much larger than the SECURE MIRROR BOOTSTRAP
file, and contains IP-CA pairs for substantially more SECURE MIRRORS.

The SECURE MIRRORS in this file are also grouped by region code.

The browser compares the SECURE MIRROR LISTS it receives from the
multiple SECURE MIRRORS it found in its SECURE MIRROR BOOTSTRAP file,
and makes a master list based on consensus.

This process occurs only once on browser install, and perhaps
periodically, such as when the browser version updates.
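
One plausible reading of "consensus" here is a simple majority vote over (IP, fingerprint) pairs - a sketch, assuming each mirror's list arrives as a list of such pairs:

    from collections import Counter

    def consensus_list(
        mirror_lists: list[list[tuple[str, str]]]
    ) -> set[tuple[str, str]]:
        votes = Counter()
        for mirror_list in mirror_lists:
            votes.update(set(mirror_list))  # one vote per mirror per pair
        quorum = len(mirror_lists) // 2 + 1  # strict majority
        return {pair for pair, count in votes.items() if count >= quorum}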


I'd suggest "secure looking glass", because while they're synonyms in everyday language, "mirror" and "looking glass" have tended to have quite different meanings as terms of art in networking.

Two problems here:

Firstly, this is still fundamentally a matter where the browser vendor remains the trust anchor. You have not changed anything from the status quo. In principle, each initial secure mirror might hand out a completely different list and therefore abrogate their responsibility and break the chain of trust back to the browser vendor.

But in practice, the browser is going to make requests to these 100 secure mirrors and get the same list from each as it already had, and so the list remains stable. To assume it changes radically is to assume that the initial secure mirrors all somehow collude to provide a different set, not including themselves, which seems... odd?

Secondly, who runs all these secure mirrors, and why? In my mistakenly-private note, I mentioned the word "economics", and you said that not everything was about money. This is true; economics is not all about money either, but it is about incentives and outcomes. What is the incentive for an entity to be a secure mirror provider?

At first this was unclear to me, but after a bit of thought, I realised that it's a really great way to find out what sites people are visiting. It's at least as good as DoH (Slogan: "We'll monitor all your DNS requests for your privacy").

I'm sure that some people will run one out of the goodness of their heart, or because they have explicit funding to do so, like DNS root servers - but these are going to get a lot more traffic than a DNS root server, so I'm looking for quite big, and typically monetisable, incentives.
 

** VALIDATING A CA **


I'm going to shoot some fairly random holes in this, but to be clear, the overall description appears solid to me.
 
Because the above actions happen in parallel, by the time the browser
has its A record from local DNS and has started the TLS handshake, it
should have responses from the three SECURE MIRRORS with CA information
from the DNS servers that each SECURE MIRROR queried.


Not true. I assume that secure mirrors operate over HTTPS, so unless you've got an ongoing connection to the secure mirrors - and we're picking three at random for each request, so probably not - you have to go through the same connection setup and TLS handshake, and then wait for a response. On balance, then, I suspect this will significantly slow down initial connections.

You can, though, complete the TLS handshake entirely (I'm ignoring early data here) while waiting for the secure mirrors to respond, so it's not *as* bad as you write here; the trouble is that the secure mirror has to perform a TLS handshake with the same target, at least to the point of getting the server's certificate and its signature, in order to validate the data it's about to send back, so you're adding significant overhead in practical terms.
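
The mirror-side cost is easy to see: per validation request, each mirror has to do at least something like the following before it can answer (ssl.get_server_certificate performs a full handshake just to retrieve the certificate):

    import hashlib
    import ssl

    # A full TLS handshake with the target, done purely to obtain the
    # certificate the mirror is about to vouch for. No validation here;
    # the mirror only reports what it observed.
    def observed_fingerprint(host: str, port: int = 443) -> str:
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        return hashlib.sha256(der).hexdigest()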

This, incidentally, means that traffic to the requested site will quadruple in terms of the TLS handshakes, which will drive up the costs of hosting - or at least load balancing in most cases. Again, economics is not all about money, but what's the incentive here?
 
If all the fingerprints match, and all the A records match, the browser
trusts the CA and the IP.

You needn't compare fingerprints; you have the entire certificate. Mind you, you'll need a list of certificates and addresses, and how they relate, but that's all a soluble problem.
 

** HOW SECURE IS THIS? **

Let's Encrypt is considered very secure.

OK, hold up there. ACME is considered secure enough, but I don't think anyone assumes that ACME is as secure as it can possibly get.

As far as domain validation goes, this is arguably more secure than
Let's Encrypt, because it is done in near-real-time, which means a
revocation is also near real-time. Additionally, this method also
verifies the A record for the server.
 
Oh, wait now. Revocation isn't the same as "my certificate has changed", it can also mean "an attacker has taken complete control of my server". Your method so far proves that everyone else sees the same certificate and address records; it doesn't say much about compromise of either.
 

** HOW FAST IS THIS? **

Writing out a detailed process like this makes it seem slow. But since
the browser's request for information from SECURE MIRRORS happens at
the same time as the normal DNS lookup, and in parallel, this should
not add time to the page load.


Narrator: It *would* add time to the page load.
 

** WHO OWNS THE SECURE MIRRORS? **

Everybody with an interest or stake in the Open Internet is invited and
encouraged to operate their own SECURE MIRROR as part of a world-wide
SECURE MIRROR NETWORK.

This includes:

(1) Corporations
(2) Non-profits
(3) Governments(*)
(4) Universities
(5) Individual Enthusiasts


(6) Advertising Agencies
(7) Criminals
 
The requirements to operate a SECURE MIRROR are only technical, never
political.

(!)
 
To be a SECURE MIRROR in good standing, the SECURE MIRROR must:

(1) Have a dedicated IP Address.


What does this mean? Do you mean a unique IP address only used for this purpose? Or a unique IP address for each secure mirror that can be used for other purposes?
 
(2) Have a registered domain name.


You've stipulated that they are referenced only by IP address.
 
(3) Have an Average Ping Time to the next closest SECURE MIRROR under a
threshold.


How is this measured?
 
(4) Have a Request Response Time under a threshold.

(5) Have 99.9% uptime.

(6) Not deny any requests (cannot be behind an application firewall
with CAPTCHAs, for example).


Not deny *any* requests? Including DDoS?
 
(7) Have a very low percentage of DISSENTS.


(A DISSENT occurs when one SECURE MIRROR returns results that differ
from the other two. This can occur during DNS propagation.)

DNS doesn't propagate; you'd write a secure mirror to query authoritative servers directly.
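
That is, something like the following (a sketch using dnspython, ignoring CNAMEs and errors): find the zone's NS set, then put the question straight to an authoritative server, bypassing any recursive cache:

    import dns.message
    import dns.query
    import dns.resolver

    def authoritative_a_records(domain: str) -> list[str]:
        # Find an authoritative server for the zone...
        ns_name = dns.resolver.resolve(domain, "NS")[0].target
        ns_addr = dns.resolver.resolve(ns_name, "A")[0].address
        # ...and ask it directly; no recursive cache, no "propagation".
        query = dns.message.make_query(domain, "A")
        reply = dns.query.udp(query, ns_addr, timeout=3)
        return [rr.address for rrset in reply.answer for rr in rrset]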
 

There should be thousands of SECURE MIRRORS, if not more.

Each entity is allowed to operate only one SECURE MIRROR.


OK, how is *any* of this enforced?
 
A large tech company may operate many servers behind a load balancer,
but this is considered one SECURE MIRROR. The purpose of this rule is
to ensure that each stakeholder gets only one "vote" in the SECURE
MIRROR NETWORK.


How could anyone tell?

So here's a fun thought experiment - if I gradually rolled out a bunch of these secure mirror things, and carefully responded with correct data to the others to keep mine appearing legit, could I start detecting whether the three mirrors a browser instance was using were in fact all mine, and from that start subverting TLS wildly? Add a DoH setup (for your privacy!) and I have complete control over your browser. Awesome.

Obviously I needn't run thousands of instances, just add a load balancer with a bunch of addresses on.
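
Quick arithmetic, assuming the browser picks its three mirrors uniformly at random: if I control k of N mirrors, I capture a given lookup with probability (k/N)^3 - so 10% of the network gets me one lookup in a thousand. That sounds small until you remember I get a fresh draw on every lookup, from every user, indefinitely.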

What prevents this? And if it's just numbers, does this mean the system will be exceptionally vulnerable to begin with? 
 
A SECURE MIRROR must implement the SECURE MIRROR PROTOCOL, which is
both a server and client protocol for a RESTful API, accessed only via
HTTPS.

Each SECURE MIRROR should have its own Certificate Authority, and use
that Certificate Authority to sign that SECURE MIRROR'S Server
Certificate, and nothing else.


Why does it need a distinct certificate authority? Why not just use a self-signed cert? It makes no difference, I'm just curious.

** CONCLUSION **

I am aware that I have omitted many small technical details. Some, I
have thoughts on, and some, I would welcome the feedback of experienced
software architects and network engineers in solving.

In summary, while I don't like the status quo much either, I don't think this is workable, or indeed would replace the status quo in any useful manner.

FWIW, I did think about doing something broadly similar, but specific to XMPP servers, a few years back and came to the conclusion it was at best a significant resource sink and at worst a nasty security disaster waiting to happen.

Dave.
