Re: jitterentropy vs. simulation


On 04/12/2023 12:06, Benjamin Beichler wrote:
On 01.12.2023 at 19:35, Johannes Berg wrote:
[I guess we should keep the CCs so others see it]

Looking at the stuck check, it will be bogus in simulations.

True.

You might as well ifdef that instead.

If a simulation is running, insert the entropy regardless and do not compute the derivatives used in the check.

Actually, you mostly don't want anything inserted in that case, so it's
not bad to skip it.

I was mostly thinking this might be better than adding a completely
unrelated ifdef. Also I guess in real systems with a bad implementation
of random_get_entropy(), the second/third derivatives might be
constant/zero for quite a while, so it may be better to abort?
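
(For reference, roughly what the stuck test computes; this is a simplified
userspace paraphrase of crypto/jitterentropy.c, not the exact kernel code.
A sample counts as stuck when the first, second or third discrete
derivative of the timestamp delta is zero, which a constant simulated
clock hits on every sample once the state has warmed up.)

#include <stdint.h>
#include <stdio.h>

struct stuck_state {
	uint64_t last_delta;	/* previous first derivative  */
	uint64_t last_delta2;	/* previous second derivative */
};

static int sample_is_stuck(struct stuck_state *s, uint64_t delta)
{
	uint64_t delta2 = delta - s->last_delta;	/* second derivative */
	uint64_t delta3 = delta2 - s->last_delta2;	/* third derivative  */

	s->last_delta = delta;
	s->last_delta2 = delta2;

	/* stuck if any of the three derivatives is zero */
	return !delta || !delta2 || !delta3;
}

int main(void)
{
	struct stuck_state s = { 0, 0 };
	int i;

	/* A perfectly regular "clock": after the first sample, every
	 * further sample is flagged as stuck. */
	for (i = 0; i < 5; i++)
		printf("delta=100 stuck=%d\n", sample_is_stuck(&s, 100));

	return 0;
}
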
Maybe a dumb question: could we simply implement a timex.h for UM which delegates to the x86 variant in non-time-travel mode

Sounds reasonable.

and otherwise have random_get_entropy() pull some randomness from the host or from a file/pipe configurable from the UML command line?

Second one.

We can run haveged in pipe mode and read from the pipe. Additionally, this allows deterministic simulation if need be: you can record the haveged output and reuse it for more than one simulation.
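
Purely as a sketch of how that could be plumbed in (hypothetical, not
existing UML code: sim_random_get_entropy(), entropy_path and the
command-line parsing are made up for illustration; os_open_file(),
os_read_file() and of_read()/OPENFLAGS() are UML's existing host-file
helpers from <os.h>):

#include <os.h>

static char *entropy_path;	/* would come from the UML command line */
static int entropy_fd = -1;

static inline unsigned long sim_random_get_entropy(void)
{
	unsigned long v = 0;

	/* lazily open the configured host file or pipe */
	if (entropy_fd < 0 && entropy_path)
		entropy_fd = os_open_file(entropy_path,
					  of_read(OPENFLAGS()), 0);

	if (entropy_fd >= 0 &&
	    os_read_file(entropy_fd, &v, sizeof(v)) == sizeof(v))
		return v;

	return 0;	/* no host entropy source configured */
}

Pointing the path at a pipe fed by haveged gives live jitter; pointing it
at a file containing a previous capture replays the same sequence, which
is what makes runs repeatable. Outside time-travel mode one could instead
fall back to the host TSC.
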


I would say that if the random jitter is truly deterministic for a simulation, that seems good enough.

That said, it would be nice to be able to configure all random sources to pull entropy from some file specified on the command line, but that is a different topic.


In any case, I couldn't figure out any way to not configure this into
the kernel when any kind of crypto is also in ...

johannes

--
Anton R. Ivanov
https://www.kot-begemot.co.uk/