Re: get_random_bytes returns bad randomness before seeding is complete

Stephan's driver, the HAVEGE system & several others purport to
extract entropy from a series of timer calls. Probably the best
analysis is in the McGuire et al. paper at
https://static.lwn.net/images/conf/rtlws11/random-hardware.pdf & the
simplest code is in my user-space driver at
https://github.com/sandy-harris/maxwell
The only kernel-space code I know of is Stephan's.

If the claim that such calls give entropy is accepted (and I think it
should be), then assuming one bit of entropy per call, needing 100 or
so bits, and spacing the calls 100 ns apart, loading up a decent chunk
of startup entropy takes about 100 * 100 ns = 10,000 ns, or 10
microseconds, which looks like an acceptable delay. Can we just do
that very early in the boot process?
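
A very rough user-space sketch of what I mean (untested; the
100-sample count, the one-bit-per-call entropy assumption and the
64-bit fold are just illustrative, not what maxwell or Stephan's
driver actually do):

    /* Collect ~100 low-order timer bits, one per clock_gettime() call.
     * The loop itself supplies the spacing between calls; a real pool
     * would be wider than 64 bits and would get properly hashed. */
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        uint64_t pool = 0;
        struct timespec ts;

        for (int i = 0; i < 100; i++) {
            clock_gettime(CLOCK_MONOTONIC, &ts);
            /* rotate the pool and mix in the low nanosecond bit */
            pool = ((pool << 1) | (pool >> 63)) ^ (uint64_t)(ts.tv_nsec & 1);
        }
        printf("folded timer bits: %016llx\n", (unsigned long long)pool);
        return 0;
    }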

Of course this will fail on systems with no high-res timer. Are there
still some of those? On a system that lacks the realtime library's
nanosecond timer but has the POSIX standard microsecond timer, the
calls would have to be spaced about 1000 times further apart, so the
same 100 samples take roughly 10 ms, a delay in the milliseconds.
Would that be acceptable in those cases?
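
The fallback might look something like this (again untested; the
function name and the one-bit-per-sample assumption are mine):

    /* Sketch of the microsecond fallback: same fold as above, but
     * sampling gettimeofday()'s tv_usec field, so the samples land
     * roughly 1000x further apart in time. */
    #include <stdint.h>
    #include <sys/time.h>

    static uint64_t fold_usec_jitter(int nbits)
    {
        uint64_t pool = 0;
        struct timeval tv;

        for (int i = 0; i < nbits; i++) {
            gettimeofday(&tv, NULL);
            /* rotate the pool and mix in the low microsecond bit */
            pool = ((pool << 1) | (pool >> 63)) ^ (uint64_t)(tv.tv_usec & 1);
        }
        return pool;
    }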


