On Fri, Sep 2, 2011 at 10:37 PM, Jarod Wilson <jarod@xxxxxxxxxx> wrote:
> Certain security-related certifications and their respective review
> bodies have said that they find the use of /dev/urandom for certain
> functions, such as setting up ssh connections, acceptable, but if and
> only if /dev/urandom can block after a certain threshold of bytes has
> been read from it with the entropy pool exhausted.
...
> At present, urandom never blocks, even after all entropy has been
> exhausted from the entropy input pool. random immediately blocks when
> the input pool is exhausted. Some use cases want behavior somewhere in
> between these two, where blocking only occurs after some number of
> bytes have been read following input pool entropy exhaustion. It's
> possible to accomplish this and make it fully user-tunable by adding a
> sysctl to set a max-bytes-after-0-entropy read threshold for urandom.
> In the out-of-the-box configuration, urandom behaves as it always has,
> but with a threshold value set, we'll block once it has been exceeded.

Is it possible to calculate what that threshold should be?

The Yarrow paper includes arguments about the frequency of rekeying
required to keep a block-cipher-based generator secure. Is there any
similar analysis for the hash-based pool? (And if not, should we switch
to a block cipher?)

/dev/urandom should not block unless it has produced enough output since
the last rekey to require a rekey, and there is not enough entropy in
the input pool to drive that rekey. But what is a reasonable value for
"enough" in that sentence?