Re: [PATCH] random: add blocking facility to urandom

On 05.09.2011 04:36:29, +0200, Sandy Harris <sandyinchina@xxxxxxxxx> wrote:

Hi Sandy,

> On Fri, Sep 2, 2011 at 10:37 PM, Jarod Wilson <jarod@xxxxxxxxxx> wrote:
> 
>> Certain security-related certifications and their respective review
>> bodies have said that they find use of /dev/urandom for certain
>> functions, such as setting up ssh connections, is acceptable, but if and
>> only if /dev/urandom can block after a certain threshold of bytes have
>> been read from it with the entropy pool exhausted. ...
>>
>> At present, urandom never blocks, even after all entropy has been
>> exhausted from the entropy input pool. random immediately blocks when
>> the input pool is exhausted. Some use cases want behavior somewhere in
>> between these two, where blocking only occurs after some number of
>> bytes have been read following input pool entropy exhaustion. It's
>> possible to accomplish this and make it fully user-tunable, by adding a
>> sysctl to set a max-bytes-after-0-entropy read threshold for urandom. In
>> the out-of-the-box configuration, urandom behaves as it always has, but
>> with a threshold value set, we'll block once it has been exceeded.
> 
> Is it possible to calculate what that threshold should be? The Yarrow
> paper includes arguments about the frequency of rekeying required to
> keep a block cipher based generator secure. Is there any similar
> analysis for the hash-based pool? (& If not, should we switch to a
> block cipher?)

The current /dev/?random implementation is quite unique. It does not
seem to follow a "standard" design like Yarrow. Therefore, I have not
seen any analysis of how often rekeying is required.

Switching to a "standard" implementation may be worthwhile, but it would
take some effort to do right. According to the crypto folks at the German
BSI, /dev/urandom is not allowed for generating key material precisely
because of its non-blocking behavior. It would be acceptable to the BSI
to use /dev/urandom if it blocked after some threshold. Therefore, Jarod's
patch is the low-hanging fruit: it should not upset anybody, since
/dev/urandom behaves as it always has by default, and in more sensitive
environments we can use /dev/urandom with the "delayed-blocking" behavior
where /dev/random is too restrictive.
> 
> /dev/urandom should not block unless it has produced enough output
> since the last rekey to require a rekey, and there is not enough
> entropy in the input pool to drive that rekey.

That is exactly what this patch is supposed to do, is it not?
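
In other words -- again just a sketch with invented names, not code
from the patch -- the rule you describe would be:

/*
 * Sandy's rule as I read it: block only if a rekey is due AND the
 * input pool cannot supply enough entropy to drive that rekey.
 * All names are invented for illustration.
 */
static int urandom_must_block(unsigned long bytes_since_rekey,
                              unsigned long rekey_interval_bytes,
                              unsigned int input_pool_entropy_bits,
                              unsigned int rekey_need_bits)
{
        int rekey_due     = bytes_since_rekey >= rekey_interval_bytes;
        int entropy_short = input_pool_entropy_bits < rekey_need_bits;

        return rekey_due && entropy_short;
}
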
> 
> But what is a reasonable value for "enough" in that sentence?

That is a good question. I will discuss with the German BSI what
"enough" means in their view. Once that discussion concludes, I will
let you know.


Thanks
Stephan
--

