Re: [Last-Call] Last Call: <draft-ietf-tls-oldversions-deprecate-09.txt> (Deprecating TLSv1.0 and TLSv1.1) to Best Current Practice

On 11/27/20 9:53 PM, Eric Rescorla wrote:

> Keith,
>
> Thanks for your note. Most of the general points you raise here were
> discussed when the TLS WG decided to move forward with this draft [0],
> though perhaps some of that is not reflected in the text. Of course
> that doesn't make these points invalid, so I'll try to summarize my
> view of the rationale here.
>
> Your primary objection seems to be that this has the effect of
> creating interop problems for older implementations that are unable to
> upgrade. This is of course true; however, I think there are a number
> of factors that outweigh it.
>
> First, while it is certainly problematic that people who have
> un-upgraded endpoints will find they cannot connect to modern
> endpoints, we have to ask what is best for the ecosystem as a whole,
> and IMO what is best for that ecosystem is to upgrade to modern
> protocols.

If you're going to try to state what's best for the ecosystem as a whole, you need to understand that "the ecosystem" (or at least, the set of hosts and protocols using TLS) is a lot more diverse than the things that are connected to the Internet most of the time, well-supported, and easily upgraded.  The idea that everybody should be constantly upgraded has not been shown to be workable, and there are reasons to believe that it is not.

When you give advice that is unworkable because it is based on dubious assumptions, you might not only cause interoperability failures, you might end up degrading security in very important cases.

> This would be true in any case, but is doubly true in the case of
> COMSEC protocols: what we want is to be able to guarantee a certain
> minimal level of security if you're using TLS, but having weaker
> versions in play makes that hard, and not just for the un-upgraded
> people, because we need to worry about downgrade attacks.

This sort of sounds like a marketing argument.  Yes, in some sense we'd like "TLS" to mean "you're secure, you don't have to think about it", but more realistically TLS 1.0, 1.1, 1.2, and 1.3 each provide different countermeasures against attack (and in some cases different drawbacks, like TLS 1.3 + ESNI being blocked), and you probably do need to be aware of those differences.
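
(To make the downgrade point below concrete: TLS 1.3 added a sentinel
for exactly this. A server that implements 1.3 but is asked to
negotiate an older version overwrites the last 8 bytes of
ServerHello.random with a fixed value, per RFC 8446, section 4.1.3, so
an up-to-date client can detect the downgrade. A rough Python sketch of
the client-side check, purely illustrative; real stacks do this inside
the handshake code, not on raw bytes like this:)

    # Downgrade sentinel values from RFC 8446, section 4.1.3.
    # "DOWNGRD" + 0x01 is set when the server negotiates TLS 1.2;
    # "DOWNGRD" + 0x00 when it negotiates TLS 1.1 or below.
    SENTINEL_TLS12 = b"DOWNGRD\x01"
    SENTINEL_TLS11_OR_BELOW = b"DOWNGRD\x00"

    def downgrade_detected(server_random: bytes) -> bool:
        """For a TLS 1.3-capable client whose handshake landed below 1.3:
        if either sentinel appears in the last 8 bytes of the 32-byte
        ServerHello.random, the server also speaks 1.3 and something in
        the middle forced the version down."""
        return server_random[-8:] in (SENTINEL_TLS12, SENTINEL_TLS11_OR_BELOW)

Of course this only helps clients that already implement 1.3, which does
nothing for the un-upgraded endpoints we're arguing about.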

> While we have made efforts to protect against downgrade, the number of
> separate interacting versions makes this very difficult to analyze and
> ensure, and so the fewer versions in play the easier this is. Asking
> everyone else to bear these costs in terms of risk, management,
> complexity, etc. so that a few people don't have to upgrade seems like
> the wrong tradeoff.

I don't think it's a matter of "asking everyone else to bear these costs".  I think TLS < 1.2 should be disabled in the vast majority of clients, and in many (though probably not all) public-facing servers, but some users and operators will need workarounds to deal with implementations for which near-term upgrades (say, for the next 5-10 years) are infeasible.
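
(The client-side part of that is mechanically trivial on a modern
stack, which is part of why I think it's the right default. A minimal
sketch in Python, assuming Python 3.7+ with a reasonably current
OpenSSL; example.com is just a stand-in for any host:)

    import socket
    import ssl

    # Refuse anything below TLS 1.2 on the client side.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2

    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version())  # "TLSv1.2" or "TLSv1.3"

The hard part is everything else in this thread: the endpoints where
nobody can ship even that one-line change.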

> Second, it's not clear to me that we're doing the people who have
> un-upgraded endpoints any favors by continuing to allow old versions
> of TLS. As a practical matter, any piece of software which is so old
> that it does not support TLS 1.2 quite likely has a number of security
> defects (whether in the TLS stack or elsewhere) that make it hazardous
> to connect to any network which might have attackers on it, which, as
> RFC 3552 reminds us, is any network. Obviously, people have to set
> their own risk level, but that doesn't mean that we have to endorse
> everything they want to do.

Yes, but you might actually increase the vulnerability by insisting that they not use the only TLS versions that are available to them.

There's a lot of Bad Ideas floating around about what makes things secure, and a lot of Bad Security Policy that derives from those Bad Ideas.  But practically speaking, you can't change those Bad Ideas and Bad Policies overnight without likely making them much worse.  People have to actually understand what they're doing first, and that takes time.  And there are a lot of things that have to get fixed besides just the TLS versions to make some of these environments more secure.

Lots of people who have to make security-related decisions simply haven't managed to deal with the complexity of the tradeoffs, so it's really common to hear handwaving arguments of the form "the only threat we need to consider is X, and Y will deal with that threat".  And of course that's nowhere near true, and Y is woefully insufficient.  None of which is a surprise to you, I'm sure.

Personally I often ask myself "what does it take to get devices in the field regularly upgraded?"  And it's not a simple answer.  The first thing that's needed is an ongoing revenue stream to fund those upgrades, and there's a huge amount of inertia/mindshare to be overcome just for that.  The second thing is that you have to keep device vendors' corporate overlords from repurposing that money, or from sabotaging their products in various ways, say by firing all of their domestic software engineers and outsourcing that work to contractors in another country where the level of security expertise is even lower and there's no institutional memory of how things need to work.

Then you have to convince customers not only that regular updates are a good idea, just like preventative maintenance (when many of them have very good reasons for not doing anything that could disrupt their operations), but also arrange for those upgrades to be convenient and explicitly managed in a way that works consistently (or reasonably so) across all of their vulnerable devices.  (And no, they're not going to let those devices talk to anything on the public Internet, and you'd have quite an uphill battle arranging for VPNs to hardware vendors.)  And there's still the problem of isolated sites that are disconnected from the Internet (which of course doesn't make them safe, we know how Stuxnet worked, but it probably does make them safer than any other near-term arrangement that is likely to happen).

I've tried to fight these battles many times and always lost. The more pragmatic thing to do is inform operators of the risks and let them figure out how to manage them, not insist that everyone follow the same policy.


> Finally, as is often said, we're not the protocol police, so we can't
> make anyone turn off TLS < 1.2. However, we need to make the best
> recommendation we can, and that recommendation is that people should
> not use versions prior to TLS 1.2.

Ok, but the draft says MUST NOT.

> If people choose not to comply, that's of course their right. We were
> certainly aware at the time this document was proposed that some
> people would take longer than others to comply, but the purpose was to
> move the ecosystem in the right direction, which is to say TLS >= 1.2.
> I believe that a MUST is more effective than a SHOULD here.
>
> -Ekr

> P.S. A few specific notes about your technical points here:
>
> > But some of those embedded devices do support TLS, even if it's old
> > TLS (likely with self-signed certs... TLS really wasn't designed to
> > work with embedded systems that don't have DNS names.)
>
> This is not correct. TLS takes no position at all on how servers are
> authenticated. It merely assumes that there is some way to validate
> the certificate and outsources the details to application bindings.
> For instance, you could have an IP address cert.

That's technically correct of course, but I think you know what I mean.  Without a reliable way of knowing that the server's certificate is signed by a trusted party, the connection is vulnerable to an MITM attack.  And the only widely implemented, reliable way of doing that is to use well-known and widely trusted CAs.  Yes, some implementations run their own CAs, issue their own server certs, and install their own root/secondary certs in clients, and they can make the subjects of those certs IP addresses or pretty much anything they want.  I've done that myself.  But I've never actually seen a single industrial facility do that, or express interest in doing it, no matter how many other security measures they employed.
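
(For the record, the client side of a private-CA setup is also simple
on a general-purpose platform (Python 3.7+ here); the operational side
is what I've never seen an industrial facility take on. A sketch; the
root file name and the RFC 5737 documentation address are of course
hypothetical:)

    import socket
    import ssl

    # Trust only a privately operated root CA, not the public CA set.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # verification on by default
    ctx.load_verify_locations(cafile="plant-root-ca.pem")  # hypothetical file

    with socket.create_connection(("192.0.2.10", 443)) as sock:
        # server_hostname is matched against the cert's subjectAltName;
        # for a cert issued to an IP address, that is the IP itself.
        with ctx.wrap_socket(sock, server_hostname="192.0.2.10") as tls:
            print(tls.getpeercert()["subjectAltName"])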


> > For newer interactive clients I believe the appropriate action when
> > talking to a server that doesn't support TLS >= 1.2 is to (a) warn
> > the user, and (b) treat the connection as if it were insecure.  (so
> > no "lock" icon, for example, and the usual warnings about submitting
> > information over an insecure channel.)
>
> I'm not sure what clients you're talking about, but for the clients I
> am aware of, this would be somewhere between a broken experience and
> an anti-pattern. For example, in Web clients, because the origin
> includes the scheme, treating https:// URIs as http:// URIs will have
> all sorts of negative side effects, such as making cookies
> unavailable, etc. For non-Web clients such as email and calendar,
> having any kind of overridable warning increases the risk that people
> will click through those warnings and expose their sensitive
> information such as passwords, which is why many clients are moving
> away from this kind of UI.

UI design is a tricky art, and I agree that some users might see (or type) https:// in a field and assume that the connection is secure.  But I think it's possible for UI designs to be more informative and less likely to be misunderstood, if the designers understand why that's important.  I also think that the IETF is on thin ice if we think we're in a better position than UI designers to decide what effectively informs users and allows them to make effective choices, across all devices and use cases.
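
(To be concrete about what I proposed above, here's a rough sketch of
the non-Web-client behavior I had in mind, my own illustration rather
than anything from the draft: prefer TLS >= 1.2, and fall back only
with a loud warning while reporting the session as insecure to the rest
of the application. Note that many current OpenSSL builds refuse
TLS < 1.2 outright, in which case the fallback simply fails:)

    import socket
    import ssl
    import warnings

    def connect(host: str, port: int = 443):
        """Try TLS >= 1.2 first; fall back to legacy versions only with
        a warning, and tell the caller to treat the session as insecure
        (no lock icon, insecure-submission warnings, and so on)."""
        strict = ssl.create_default_context()
        strict.minimum_version = ssl.TLSVersion.TLSv1_2
        sock = socket.create_connection((host, port))
        try:
            return strict.wrap_socket(sock, server_hostname=host), True
        except ssl.SSLError:
            sock.close()  # peer may only speak TLS <= 1.1; retry permissively

        legacy = ssl.create_default_context()
        legacy.minimum_version = ssl.TLSVersion.TLSv1
        warnings.warn(f"{host}: peer only offers legacy TLS; "
                      "treating this connection as insecure")
        sock = socket.create_connection((host, port))
        return legacy.wrap_socket(sock, server_hostname=host), False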

Keith

