On Friday, 3 January 2025, 18:23:19 Central European Standard Time, Francesco Valla wrote:

Hi Francesco,

> > ======
> >
> > Now given the description, what can you do? I would apply the following
> > steps:
> >
> > 1. measure whether the timer your system has is really a high-res timer
> > (the higher the resolution, the higher the detected variations and thus
> > the entropy)
>
> Resolution reported by clock_getres() is 1ns. Is this sufficient?

It should be, but it has to be seen in relation to the complexity of the CPU
itself.

> > 2. assuming that test 1 shows a high-res timer, reduce the OSR back to 1
> > (CRYPTO_JITTERENTROPY_OSR) and measure the entropy rate
>
> Turned out my system already had the OSR set to 1, since CONFIG_CRYPTO_FIPS
> was set to N.

> > 3. if test 2 shows insufficient entropy, increase the amount of memory
> > (CRYPTO_JITTERENTROPY_MEMSIZE_*) and measure the entropy rate
> >
> > The tool for measuring the entropy rate is given in [1] - check the README
> > as you need to enable a kernel config option to get an interface into the
> > Jitter RNG from user space. As you may not have the analysis tool, you
> > may give the data to me and I can analyze it.
>
> Here are the results (with default parameters for processdata.sh):
> ...
> min(H_original, 4 X H_bitstring): 3.168741

That last value, the min-value, is the key: it must be larger than 1/OSR.
When it gets close to 1/OSR, the health tests start to flag errors once in a
while (with a probability of around 2**-30 per time stamp). With OSR = 1 the
threshold is 1 bit of entropy per time stamp, so the 3.168741 above leaves a
comfortable margin.

So, you have OSR set to 1, which is already the lowest value supported by the
Jitter RNG. Thus, there is unfortunately not much more you can do to increase
the performance during boot time. I thought OSR was set to 3 in your
environment.

> min(H_original, 8 X H_bitstring): 4.473812

Ciao
Stephan
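
For reference, a minimal user-space sketch of the step-1 check via
clock_getres(). It is illustrative only and not taken from the Jitter RNG
code base; the choice of clock IDs is an assumption, since the in-kernel
Jitter RNG uses its own time-stamp source:

#include <stdio.h>
#include <time.h>

/* Print the resolution that clock_getres() reports for one clock source. */
static void print_res(const char *name, clockid_t id)
{
	struct timespec res;

	if (clock_getres(id, &res))
		perror(name);
	else
		printf("%s resolution: %ld s %ld ns\n",
		       name, (long)res.tv_sec, res.tv_nsec);
}

int main(void)
{
	/*
	 * CLOCK_MONOTONIC is what a user-space jitter collector would
	 * typically read; 1 ns here corresponds to the value reported above.
	 */
	print_res("CLOCK_MONOTONIC", CLOCK_MONOTONIC);
	print_res("CLOCK_REALTIME", CLOCK_REALTIME);
	return 0;
}

Compile with, e.g., gcc -O2 -o clockres clockres.c and run it: 1 ns is the
best case, while a much coarser resolution means fewer detectable variations
and thus less entropy per time stamp.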
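
Along the same lines, a sketch of a .config fragment for steps 2 and 3; the
OSR symbol is the one named above, while the concrete
CRYPTO_JITTERENTROPY_MEMSIZE_* option names have to be looked up in
crypto/Kconfig of the kernel tree in use:

# Jitter RNG as the entropy source under discussion
CONFIG_CRYPTO_JITTERENTROPY=y
# Step 2: keep the oversampling rate at its minimum of 1, so fewer time
# stamps are needed (this is already the default when CONFIG_CRYPTO_FIPS=n)
CONFIG_CRYPTO_JITTERENTROPY_OSR=1
# Step 3 (only if the measured entropy rate is insufficient): select one of
# the larger CRYPTO_JITTERENTROPY_MEMSIZE_* choices instead of the default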