On Fri, Nov 15, 2013 at 11:56:14AM -0300, Steve Crocker wrote:
> I've been watching this thread for a while.  The idea of making it
> harder without actually expecting the encryption to work seems like
> an implicit admission of failure.

There's a question which is being begged here by the terminology "to
work".  What's your definition of success?

For example, using D-H with no attempt to authenticate the endpoints
does not protect you against an active attacker who is carrying out a
MITM attack.  However, it does protect you against a passive attacker
who is vacuuming up all network packets and either looking for
keywords or storing it all in some big data warehouse in Utah.  It
forces said attacker to play MITM games, with all of the attendant
costs of decrypting and re-encrypting all of the data streams; the
attacker would consume a lot more power, and it would be much harder
for pervasive surveillance to be hidden in some tiny phone closet in
some telecom's fiber room.
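To make that distinction concrete, here is a minimal sketch of an
unauthenticated ephemeral key exchange.  It uses X25519 from the
pyca/cryptography package; the variable names and the HKDF "info"
string are mine, purely for illustration:

# Minimal sketch of unauthenticated (anonymous) ephemeral D-H using
# X25519 from the pyca/cryptography package.  Neither side proves its
# identity, so an active MITM can sit in the middle, but a purely
# passive eavesdropper who only records packets never learns the key.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each side generates a fresh ephemeral key pair for this session.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Only the public keys cross the wire, in the clear; there are no
# certificates, no signatures, and no CA involvement.
alice_pub = alice_priv.public_key()
bob_pub = bob_priv.public_key()

# Both sides derive the same shared secret from their own private key
# and the peer's public key.
alice_shared = alice_priv.exchange(bob_pub)
bob_shared = bob_priv.exchange(alice_pub)
assert alice_shared == bob_shared

# Run the raw shared secret through a KDF before using it as a
# session key.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"illustrative session key",
).derive(alice_shared)

A passive tap sees only the two public keys and learns nothing about
the session key; to read the traffic, the attacker has to actively
substitute its own keys in both directions and then decrypt and
re-encrypt every packet for the lifetime of the connection.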
The old world-view was that if you couldn't protect against an active
attacker, you might as well not do anything at all.  So admitting
failure with the old world view, which required us to "go big or go
home" (with the result that in many cases vendors just went home, and
left huge portions of internet traffic completely unencrypted), might
be the first step to wisdom.  We've tried things the old way, with the
result that IPSEC is so painful that it only gets used in a very
restricted problem domain such as VPN's.  Maybe it's time to try
something new, that indeed only makes things harder for pervasive
surveillance, without necessarily making it impossible against a
directed attack.

As far as I'm concerned, making things harder, even if it is not
guaranteed to protect against all attack scenarios, is an example of
something which works --- it works to protect against a specific
threat model that appears to be a real one that we need to be worried
about.

> I think the right posture is to
> make privacy via encryption the default at every level, or perhaps
> even mandatory, and to expect it to work.  Key management has to be
> seamless and automatic, and the software and hardware have to be
> trusted.

I agree that we will probably need encryption at multiple levels; but
I also think we can't assume that software and hardware "have" to be
trusted.  The design must be robust, not fragile, which is one of the
reasons to have encryption at multiple layers.  It also means that we
should have at least one layer which doesn't assume (for example)
that all 600+ CA's which are authorized to certify all web sites in
the world are trustworthy.
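One illustration of what such a layer might look like (my example,
not something proposed in this thread) is certificate pinning: the
client checks the server's certificate against a fingerprint it
learned out of band, so that no CA's signature is sufficient by
itself.  A minimal sketch, where the pinned fingerprint is a
placeholder:

# Minimal sketch of certificate pinning: rather than trusting
# whatever any of the 600+ CA's will sign, the client compares the
# server's certificate against a SHA-256 fingerprint obtained out of
# band.  The fingerprint below is a placeholder, not a real value.
import hashlib
import socket
import ssl

PINNED_SHA256 = "0" * 64  # placeholder; fill in the real fingerprint

def connect_with_pin(host: str, port: int = 443) -> ssl.SSLSocket:
    # Turn off CA-based validation; the pin is the only check this
    # layer performs.  (A robust design would layer both checks.)
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der_cert = sock.getpeercert(binary_form=True)
    if hashlib.sha256(der_cert).hexdigest() != PINNED_SHA256:
        sock.close()
        raise ssl.SSLError("server certificate does not match the pin")
    return sock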
That's not an admission of failure; that's an engineering design
which is intended to be robust in the face of failures, which *WILL*
happen.

- Ted