[RFC] random: use blake2b instead of blake2s?

The original random.c used a 4k-bit input pool, I think mainly because
sometimes (e.g. for a large RSA key) we might need up to 4k bits of
high-grade output. The current driver uses only a 512-bit hash
context, basically a Yarrow-like design. That is quite plausible,
since we trust both the hash and the ChaCha output mechanism, but it
seems to me there are open questions.

One is that the Yarrow designers no longer support it; they have moved
on to a more complex design called Fortuna, so one might wonder if the
driver should use some Fortuna-like design. I'd say no; we do not need
extra complexity in the kernel & it is not clear there'd be a large
advantage.

Similarly, there's a Blake 3 that might replace Blake 2 in the driver;
the designers say it is considerably faster. I regard that as an open
question, but will not address it here.

What I do want to address is that the Yarrow paper says the
cryptographic strength of the output is at most the size of the hash
context: 160 bits for them & 512 for our current driver. Or, since we
use only 256 bits to rekey, our strength might be only 256 bits. These
numbers are likely adequate, but if we can increase them easily, why
not?
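As a back-of-the-envelope check, the bound being discussed is
min(hash-context bits, rekey bits). A tiny sketch with the driver's
current figures and the hypothetical blake2b ones (the variable names
are mine, not the driver's):

```python
# Current driver (blake2s): 512-bit compression context, 256-bit rekey.
context_bits = 512
rekey_bits = 256
print(min(context_bits, rekey_bits))   # 256

# Hypothetical blake2b driver: 1024-bit context, 512-bit rekey.
print(min(1024, 512))                  # 512
```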

Blake comes in two variants, blake2s and blake2b; presumably b and s
are for big & small. The kernel crypto library has both, & the driver
currently uses 2s. 2s has a 512-bit context (16 32-bit words) and can
give up to 256 bits of output; 2b has a 1024-bit context (16 64-bit
words) and can give up to 512.
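The output-size difference is easy to see with Python's hashlib, which
implements both variants (this is just an illustration of the sizes,
not the kernel code):

```python
import hashlib

# blake2s works in 32-bit words and caps its digest at 32 bytes (256 bits);
# blake2b works in 64-bit words and caps at 64 bytes (512 bits).
assert hashlib.blake2s.MAX_DIGEST_SIZE == 32
assert hashlib.blake2b.MAX_DIGEST_SIZE == 64

# Same input, widest digest each variant allows (the defaults):
h_s = hashlib.blake2s(b"entropy sample").hexdigest()
h_b = hashlib.blake2b(b"entropy sample").hexdigest()
print(len(h_s) * 4, "bits")   # 256 bits
print(len(h_b) * 4, "bits")   # 512 bits
```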

To me, it looks like switching to 2b would be an obvious improvement,
though not at all urgent. Benchmarks I've seen seem to say it is
faster on 64-bit CPUs and slower on 32-bit ones, but neither
difference is huge.
