On Mon, Jul 29, 2002 at 06:13:08PM +0000, daw@mozart.cs.berkeley.edu wrote:
> On the other hand, the idea of combining many entropy sources using
> a cryptographic hash is a good one. If this is used for cryptographic
> purposes, I'd just like to see some more reliably-unpredictable
> sources in there, if it were me. However, maybe it's good enough for
> your purposes. I don't know.
>
> I collect online resources on how to generate cryptographic-quality
> unpredictable pseudorandomness.

The term "cryptographic-quality unpredictable pseudorandomness" is
probably much better than the term "entropy" when talking about these
issues (I note that you prefer the former). Some relatively recent
work has shown that "entropy" in the sense of "Shannon entropy" may
not be the ideal measure of randomness for many uses. I've listed a
few papers that touch on this area below.

For example, the statement in the sci.crypt FAQ:

    We can measure how bad a key distribution is by calculating its
    entropy. This number E is the number of ``real bits of
    information'' of the key: a cryptanalyst will typically happen
    across the key within 2^E guesses. E is defined as the sum of
    -p_K log_2 p_K, where p_K is the probability of key K.

is actually wrong: 2^E may be an arbitrarily large underestimate of
the expected number of guesses. (Thankfully for those designing
crypto systems on this assumption, the error is in the safe
direction.) A small worked example follows the references below.

The moral of the story is that it may pay to think about what you are
using your randomness for.

David.

1) Guessing and Entropy, Massey, Proc. IEEE Int. Symp. on Info. Th.,
   1994.
2) An Inequality on Guessing and its Application to Sequential
   Decoding, Arikan, IEEE Trans. on Info. Th., 1996.
3) Two Sided Bounds on Guessing Moments, Dragomir and Boztas,
   preprint, 1997.
4) The Disparity between Work and Entropy in Cryptology, Pliam,
   preprint, 1999.
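
To make the gap concrete, here is a small sketch of my own (the
distribution is a standard textbook construction along the lines of
Massey's paper, not something from the FAQ): one key has probability
1/2, and N = 2^n keys share the remaining 1/2. The Shannon entropy
works out to E = 1 + n/2, so 2^E = 2 * 2^(n/2), while an optimal
guesser (most likely key first) still needs roughly 2^n / 4 guesses
on average.

    import math

    def shannon_entropy(probs):
        # Shannon entropy in bits: sum over keys of -p_K * log2(p_K).
        return -sum(p * math.log2(p) for p in probs if p > 0)

    def expected_guesses(probs):
        # Expected number of guesses when keys are tried in order of
        # decreasing probability (the optimal guessing strategy).
        ordered = sorted(probs, reverse=True)
        return sum(i * p for i, p in enumerate(ordered, start=1))

    n = 20
    N = 2 ** n
    probs = [0.5] + [0.5 / N] * N   # one likely key, N unlikely ones

    E = shannon_entropy(probs)
    print("E =", round(E, 2))           # 11.0 bits (= 1 + n/2)
    print("2^E =", round(2 ** E))       # 2048
    print("expected guesses =",
          round(expected_guesses(probs)))  # ~262145 (~ 2^n / 4)

Here the FAQ's 2^E figure is low by a factor of about 2^(n/2 - 3),
which grows without bound as n does; Massey's paper (reference 1)
treats the general relationship between entropy and guessing.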