I strongly suspect there are systems, mainly low-powered embedded ones, where what I am about to suggest does not apply. I think, though, that it will work in many of today's Linux environments. Perhaps we might end up with two versions: one that does it the hard way (roughly, the current driver) & another that goes the easy way when the resources are available. I think the second one could be made quite a bit simpler & faster.

Quoting an earlier post of mine in another thread:

> Many CPUs, e.g. Intel, have an instruction that gives random numbers. Some systems have another hardware RNG. Some can add one using a USB device or Denker's Turbid (https://www.av8n.com/turbid/). Many Linux instances run on VMs, so they have an emulated HWRNG using the host's /dev/random. There may be other choices: Stephan has proposed one, Havege has one & so on.

> None of those is necessarily 100% trustworthy, though the published analyses for Turbid & for (one version of) the Intel device seem adequate to me. However, if you use any of them to scribble over the entire 4k-bit input pool and/or a 512-bit Salsa context during initialisation, then it seems almost certain you'll get enough entropy to block attacks.

> They are all dirt cheap, so doing that, and using them again later for incremental squirts of randomness, looks reasonable.

Assuming you can initialise with 4k or more reasonably random bits and throw in more of those bits as required (perhaps 640 whenever you extract 512?), you no longer need any run-time entropy estimation. The driver gets simpler & faster. Analysis gets easier too; given that we have plausible input entropy, all the driver needs to be is an adequate mixer. Given that we are both hashing with SHA and mixing with Salsa, it seems obvious that it is.

The big problem then is evaluating the sources to ensure they are indeed "reasonably random". The criteria need not be remarkably stringent, though. It would take an amazingly bad (deliberately compromised?) source to give 4k of "random" data without enough entropy for security.

You do still need the code to extract entropy from interrupts, if only to make an attack by someone who has compromised a source harder (NSA getting to Intel's designers, Chinese intelligence in their factories, etc.). This does not need run-time entropy estimation either, just design-time analysis.

Mike has suggested getting rid of the locks in the driver, again making it faster & simpler. I'm not at all certain that would be a good idea in the current driver. In the simplified one suggested here, though, it would seem to make sense.