Hi Keith,
Thanks for your note. I think it's clear we see things differently,
and I don't think it's that useful to get into an extended back and
forth on those points. Accordingly I've done a fair bit of trimming to
focus on the points where I think you may have misunderstood me
(perhaps due to unclear writing on my part).
On Fri, Nov 27, 2020 at 7:39 PM Keith Moore <moore@xxxxxxxxxxxxxxxxxxxx> wrote:
> On 11/27/20 9:53 PM, Eric Rescorla wrote:
> > This
> > would be true in any case, but is doubly true in the case of COMSEC
> > protocols: What we want is to be able to guarantee a certain minimal
> > level of security if you're using TLS, but having weaker versions in
> > play makes that hard, and not just for the un-upgraded people because
> > we need to worry about downgrade attacks.
>
> This sort of sounds like a marketing argument. Yes, in some sense we'd
> like for "TLS" to mean "you're secure, you don't have to think about it"
> but more realistically TLS 1.0, 1.1, 1.2, and 1.3 each provide different
> countermeasures against attack (and in some cases, different drawbacks,
> like TLS 1.3 + ESNI being blocked) and you probably do need to be aware
> of those differences.
Well, I can't speak to how it sounds to you, but it's not a marketing
argument but rather a security one. This is easiest to understand in
the context of the Web, where you have a reference that contains one
bit: http versus https, and all https content is treated the same, no
matter which version of TLS it uses. In that context, having all the
supported versions meet some minimum set of security properties is
quite helpful. It's true that TLS 1.0, 1.1, 1.2, and 1.3 have different
properties, which is precisely why it's desirable to disable versions
below 1.2 so that the properties are more consistent.
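Concretely, a client can pin that minimum version floor directly. A minimal sketch using Python's standard ssl module (the minimum_version attribute assumes Python 3.7+):

```python
import ssl

# Sketch: a client-side context that refuses to negotiate any protocol
# version below TLS 1.2, so every connection it makes meets the same
# minimum set of security properties.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Versions below the floor are simply not offered during the handshake.
assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2
```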
> > > But some of those embedded devices do support TLS, even if it's old
> > > TLS (likely with self-signed certs... TLS really wasn't designed to
> > > work with embedded systems that don't have DNS names.)
> >
> > This is not correct. TLS takes no position at all on how servers
> > are authenticated. It merely assumes that there is some way to
> > validate the certificate and outsources the details to application
> > bindings. For instance, you could have an IP address cert.
>
> That's technically correct of course, but I think you know what I
> mean. Without a reliable way of knowing that the server's certificate
> is signed by a trusted party, the connection is vulnerable to an MITM
> attack. And the only widely implemented reliable way of doing that
> is to use well-known and widely trusted CAs.
Yes, and those certificates can contain IP addresses. Not all
public CAs issue them, but some do.
> > I'm not sure what clients you're talking about, but for the clients
> > I am aware of, this would be somewhere between a broken experience
> > and an anti-pattern. For example, in Web clients, because the origin
> > includes the scheme, treating https:// URIs as http:// URIs will have
> > all sorts of negative side effects, such as making cookies unavailable
> > etc. For non-Web clients such as email and calendar, having any
> > kind of overridable warning increases the risk that people will
> > click through those warnings and expose their sensitive information
> > such as passwords, which is why many clients are moving away from
> > this kind of UI.
> UI design is a tricky art, and I agree that some users might see (or
> type) https:// in a field and assume that the connection is secure.
In the Web context this is not primarily a UI issue; web client
security mostly does not rely on the user looking at the URL (and in
fact many clients, especially mobile ones, conceal the URL). Rather,
they automatically enforce partitioning between insecure (http) and
secure (https) contexts, and therefore having a context which is
neither secure nor insecure creates real challenges. Let me give you
two examples:
* Browsers block active "mixed content": JavaScript from http origins
loaded into an https origin. In the scenario you posit, where we
treat https from TLS 1.1 as "insecure", if the target server for
some reason gets configured as TLS 1.1, the client would have to
block it, creating breakage.
* Cookies can be set to be secure only. Here again, if you have
a situation in which some of your servers support TLS 1.2
and others TLS 1.1, then you can get breakage where cookies
are not sent.
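To make both failure modes concrete, here is a toy Python sketch; the policy functions are my own illustration of the rules, not any browser's actual implementation:

```python
from http.cookies import SimpleCookie

def blocks_active_mixed_content(page_scheme, resource_scheme):
    # An https page may not load active content (e.g. scripts) over http.
    return page_scheme == "https" and resource_scheme == "http"

def cookies_to_send(jar, context_is_secure):
    # Secure-flagged cookies are only attached in secure (https) contexts.
    return [name for name, morsel in jar.items()
            if context_is_secure or not morsel["secure"]]

jar = SimpleCookie()
jar["session"] = "abc123"
jar["session"]["secure"] = True

# If TLS 1.1 origins were demoted to "insecure", their scripts would be
# blocked on https pages, and Secure cookies would stop flowing to them.
assert blocks_active_mixed_content("https", "http") is True
assert cookies_to_send(jar, context_is_secure=True) == ["session"]
assert cookies_to_send(jar, context_is_secure=False) == []
```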
> But I think it's possible for UI designs to be more informative and less
> likely to be misunderstood, if the designers understand why it's
> important. I also think that IETF is on thin ice if we think we're
> in a better position than UI designers to decide what effectively
> informs users and allows them to make effective choices, across all
> devices and use cases.
I'm not suggesting that the IETF design UI.
We're getting pretty far into the weeds here, but what I can tell you is
that the general trend in this area -- especially in browsers but also
in some mail and calendar clients -- is to simply present an error and
to make overriding that error difficult if not impossible. This is
informed by a body of research [0] that indicates that users are too
willing to override these warnings even in dangerous settings.
-Ekr
[0] For instance:
https://www.usenix.org/legacy/events/sec09/tech/full_papers/sunshine.pdf
--
last-call mailing list
last-call@xxxxxxxx
https://www.ietf.org/mailman/listinfo/last-call