Jarod Wilson wrote:
> Matt Mackall wrote:
>> On Mon, 2011-06-13 at 18:06 -0400, Jarod Wilson wrote:
>>> Many server systems are seriously lacking in sources of entropy,
>>> as we typically only feed the entropy pool by way of input layer
>>> events, a few NIC driver interrupts, and disk activity. A non-busy
>>> server can easily become entropy-starved. We can mitigate this
>>> somewhat by periodically mixing in entropy data based on the
>>> delta between multiple high-resolution clocksource reads, per:
>>>
>>> https://www.osadl.org/Analysis-of-inherent-randomness-of-the-L.rtlws11-developers-okech.0.html
>>>
>>> Additionally, NIST already approves of similar implementations, so
>>> this should be usable in high-security deployments requiring a
>>> fair chunk of available entropy data for frequent use of /dev/random.
>>
>> So, mixed feelings here:
>>
>> Yes: it's a great idea to regularly mix other data into the pool. More
>> samples are always better for RNG quality.
>>
>> Maybe: the current RNG is not really designed with high-bandwidth
>> entropy sources in mind, so this might introduce non-negligible overhead
>> on systems with, for instance, huge numbers of CPUs.
>
> The current implementation is opt-in and single-threaded, so at least
> currently, I don't think there should be any significant issues.
I stand corrected. I hadn't considered the possible pitfalls of doing a
regular preempt_disable() and __get_cpu_var() on a system with tons of
cpus. (I'm still not sure exactly what would go wrong, but I can see
the potential for trouble of some sort.)
--
Jarod Wilson
jarod@xxxxxxxxxx