Re: [TLS] Last Call: draft-hoffman-tls-additional-random-ext (Additional Random

On Mon, Apr 26, 2010 at 12:12:36PM -0500, Marsh Ray wrote:
> On 4/23/2010 12:12 PM, Nicolas Williams wrote:
> > 
> > Irrelevant: if the random octets being sent don't add entropy (because
> > they are sent in cleartext) then this extension is completely orthogonal
> > to PRNG failures.
> 
> Even though they are sent in-the-clear, the random data do serve the
> same useful purpose as the existing [cs]_random data.

This is true, but those are already plenty large enough.  If the
argument is that the client and server random from the TLS hellos are
not large enough, then let's hear that argument.
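
(Back-of-the-envelope, in rough Python, for where the 224-256 bit figure
below comes from -- this just restates the RFC 5246 layout of the 32-byte
hello random, nothing from the draft itself:)

    # Entropy already present in each hello random per RFC 5246, 7.4.1.2.
    # The low end assumes the 4-byte gmt_unix_time field contributes
    # nothing; the high end assumes those bytes are random too.
    random_field_bytes = 32   # client_random and server_random are each 32 bytes
    gmt_bytes = 4             # nominally gmt_unix_time, not required to be accurate
    print((random_field_bytes - gmt_bytes) * 8)   # -> 224
    print(random_field_bytes * 8)                 # -> 256

And that is per field; the client and server randoms together contribute
twice that to the key derivation.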

> Because they are unpredictable they make offline precomputation harder.
> I think of it as adding entropy into offline computation, without adding
> any to the online computation.

The right term for this might be "confounding" (though it's a term I've
only seen used in the context of Kerberos).

> I would think that the current 224-256 bits is enough to thwart offline
> attacks. The attacker would need something proportional to 2**224
> storage to store the results of his precomputation, no?

Right.

> Assume attacker can knock off 2**42 using rainbow table techniques (he
> has a 1024 unit cluster of CPUs which can each compute one result online
> every clock at 4GHz). So he needs to store something like 2**182 results
> from his precomputation. Assuming 1 bit per result, probably you'd need
> more. Raw HDDs are the cheapest form of mass storage today at $75/TB
> (10**12 bytes?). Such a system would cost
>              $ 5746858278247083218843801351813700000000000.00
> today. Of course those costs are likely to decline over time.

2^182 bits of storage is not a remotely realistic number: it's within a
few orders of magnitude of the number of atoms in the Solar System (if I
have my math right).
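
Rough numbers, just to put that scale on the table (Python; the ~1.2e57
atom count is the usual order-of-magnitude estimate for the Sun, which
holds nearly all of the Solar System's mass):

    # Order-of-magnitude check on the storage figure quoted above.
    bits = 2 ** 182
    print(f"2^182 bits       ~ {bits:.1e}")        # ~6.1e+54
    print(f"as bytes         ~ {bits // 8:.1e}")   # ~7.7e+53
    atoms_in_sun = 1.2e57   # commonly quoted estimate, order of magnitude only
    print(f"atoms in the Sun ~ {atoms_in_sun:.1e}")

Even at an absurd one bit per atom, you'd be consuming a measurable
fraction of a star just to hold the table, before anyone ever reads
from it.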

> > I do believe it's mostly harmless; I am concerned that 2^16 max octets
> > seems like a bit much, possibly a source of DoS attacks.  I believe it's
> > also useless.  As such I'm not opposed to it as an Experimental or even
> > Informational RFC.

Actually, I withdraw the above quoted comment: it's already the case
that a hello can bear large extensions.

> There is a danger with this proposal. In no way do I mean to suggest
> that Paul has any unstated motivations here.
> 
> One aspect of saying that a data area is random is saying that the RFCs
> can impose no restrictions on it. Allowing arbitrary unstructured
> "random" data in the protocol opens the door for private extensions to
> be added by various parties.

I also don't think that Paul intends that.

> For example, it appears that 4 of the 32 bytes originally specified for
> random data got repurposed for GMT leaving "this is GMT but the clock is
> not required to be right" in the spec.
> 
> Once a few more of these accumulate in the protocol without central
> coordination we end up with incompatibilities that the IETF process can
> no longer prevent.

Yes, but since we do need nonces in our protocol, I think this is a
risk we have to live with.  What you are arguing for, IIUC, is that
nonces shouldn't be arbitrarily large, but just as large as they need
to be -- I agree with that.
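
On the GMT point, for concreteness, here's a rough sketch (Python,
illustration only, not anything from the draft) of how a 32-byte hello
random is laid out under RFC 5246:

    # Illustrative layout of a 32-byte TLS hello random (RFC 5246, 7.4.1.2):
    # a 4-byte gmt_unix_time followed by 28 random bytes; the spec does not
    # require the clock to actually be right.
    import os
    import struct
    import time

    def make_hello_random():
        gmt_unix_time = int(time.time()) & 0xffffffff  # "not required to be right"
        return struct.pack("!I", gmt_unix_time) + os.urandom(28)

    assert len(make_hello_random()) == 32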

Nico
-- 
_______________________________________________
Ietf mailing list
Ietf@xxxxxxxx
https://www.ietf.org/mailman/listinfo/ietf
