Re: [boot-time] jent_mod_init on beagleplay (was RE: [boot-time] [RFC] analyze-initcall-debug.py - a tool to analyze the initcall debug output)


 



On Thursday, 2 January 2025, 11:33:08 CET, Francesco Valla wrote:

Hi Francesco,

> That would be wonderful! Whenever you have the time, please let me know what
> analysis you need.
> 

Ok, some background: the Jitter RNG technically has 2 noise sources which are 
sampled concurrently:

1. variations of the execution of CPU instructions

2. variations of memory access times

For (1) the Jitter RNG has a fixed set of instructions for which it performs 
the execution time measurements: the SHA-3 conditioning operation 
(specifically the Keccak sponge function). For that, it performs a given 
number of Keccak operations.

For (2) the Jitter RNG allocates a fixed amount of memory, simply reads and 
writes data there, and measures the timing of these accesses.
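
To illustrate the idea, here is a minimal user-space sketch of my own (not 
the actual jitterentropy code, and all names and sizes in it are 
illustrative assumptions): take a time stamp, perform a fixed amount of 
work, take another time stamp, and use the variation of the delta as the 
raw noise. The same pattern with a fixed number of Keccak operations as 
the timed work corresponds to noise source (1).

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <stddef.h>
#include <time.h>

/* Read a high-resolution time stamp in nanoseconds. */
static uint64_t now_ns(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/*
 * One raw sample of noise source (2): time a walk over a buffer and
 * return the delta. The entropy lies in the *variation* of this delta
 * across samples, not in its absolute value.
 */
static uint64_t sample_mem_access(volatile unsigned char *mem, size_t len)
{
    uint64_t start = now_ns();

    for (size_t i = 0; i < len; i++)
        mem[i] = (unsigned char)(mem[i] + 1);   /* read-modify-write */

    return now_ns() - start;
}

int main(void)
{
    size_t len = 2048;  /* illustrative buffer, matching the 2 kByte default */
    volatile unsigned char *mem = calloc(len, 1);

    for (int i = 0; i < 5; i++)
        printf("delta %d: %llu ns\n", i,
               (unsigned long long)sample_mem_access(mem, len));
    return 0;
}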


For (1), the more instruction executions are sampled, the more entropy is 
gathered. This means more time is required to sample that entropy, i.e. when 
you increase the number of measured Keccak operations, you get more entropy.

For (2), the entropy gathered increases once the memory is large enough to 
"spill over" into the next level of the memory hierarchy (from L1 to L2 to L3 
to RAM).


So, for (2), getting more entropy is largely independent of the execution 
time. But for (1), the entropy rate depends directly on the execution time.


Thus, what you want is to try to reduce the time spent on (1).


The key now is that the overall entropy rate the Jitter RNG requires for 
functioning must be such that when gathering 256 bits of data from it, it 
contains 256 bits of entropy.


Now, there are 2 "knobs" to turn via Kconfig:

- the oversampling rate (OSR): given that the individual number of rounds for 
(1) and the number of accesses for (2) are kept the same, the OSR causes the 
Jitter RNG to multiply the round counts. For example, the baseline with OSR == 
1 is that for gathering 256 bits of entropy, both noise sources are sampled 
256 times. For an OSR of, say, 3, to get 256 bits of entropy, both noise 
sources are sampled 3 * 256 = 768 times - in other words, each raw sample 
then only needs to be credited with 1/3 bit of entropy. This value was 
changed from 1 to 3 for 6.11 because there were reports that on some CPUs the 
Jitter RNG did not produce sufficient entropy - most CPUs, however, can 
perfectly live with OSR == 1.

- the amount of memory for (2) can be increased. The default is 2 kBytes, 
which usually means that the L1 cache can fully serve the accesses.
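
For orientation, the two knobs could show up in a kernel .config roughly as 
follows. Note that the concrete CRYPTO_JITTERENTROPY_MEMSIZE_* values form a 
Kconfig choice whose available sizes depend on the kernel version, so the 
exact option name below is an assumption to verify against your 
crypto/Kconfig:

CONFIG_CRYPTO_JITTERENTROPY=y
# knob 1: oversampling rate (default 3 since 6.11)
CONFIG_CRYPTO_JITTERENTROPY_OSR=1
# knob 2: memory for noise source (2); an option larger than the 2 kByte
# default pushes the accesses out of L1 (option name assumed here)
CONFIG_CRYPTO_JITTERENTROPY_MEMSIZE_1024=y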


======


Now given the description, what can you do? I would apply the following steps:

1. measure whether the timer your system has is really a high-res timer (the 
higher the resolution, the higher the detected variations and thus the 
entropy) - a rough user-space check is sketched after this list

2. assuming that test 1 shows a high-res timer, reduce the OSR back to 1 
(CRYPTO_JITTERENTROPY_OSR) and measure the entropy rate

3. if test 2 shows insufficient entropy, increase the amount of memory 
(CRYPTO_JITTERENTROPY_MEMSIZE_*) and measure the entropy rate
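
For step 1, a rough first indication can already be obtained from user space. 
This is only a quick sanity-check sketch of my own, not part of the Jitter 
RNG (the kernel performs its own timer checks at initialization): it prints 
the advertised resolution of CLOCK_MONOTONIC and the smallest non-zero delta 
between back-to-back reads.

#include <stdio.h>
#include <stdint.h>
#include <time.h>

static uint64_t to_ns(const struct timespec *ts)
{
    return (uint64_t)ts->tv_sec * 1000000000ULL + ts->tv_nsec;
}

int main(void)
{
    struct timespec res, a, b;
    uint64_t min_delta = UINT64_MAX;

    /* Advertised resolution of the monotonic clock. */
    clock_getres(CLOCK_MONOTONIC, &res);
    printf("advertised resolution: %ld ns\n", res.tv_nsec);

    /* Smallest non-zero step between back-to-back reads. */
    for (int i = 0; i < 1000000; i++) {
        clock_gettime(CLOCK_MONOTONIC, &a);
        clock_gettime(CLOCK_MONOTONIC, &b);
        uint64_t d = to_ns(&b) - to_ns(&a);
        if (d && d < min_delta)
            min_delta = d;
    }
    printf("smallest observed non-zero delta: %llu ns\n",
           (unsigned long long)min_delta);
    return 0;
}

If the smallest observed delta sits in the microsecond range rather than in 
the tens of nanoseconds, the timer is likely too coarse and the OSR or the 
memory size would have to compensate.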



The tool for measuring the entropy rate is given in [1] - check the README as 
you need to enable a kernel config option to get an interface into the Jitter 
RNG from user space. As you may not have the analysis tool, you may give the 
data to me and I can analyze it.


More details on tuning the Jitter RNG are given in [2] - it discusses the 
user space variant, but applies to the kernel as well.

[1] https://github.com/smuellerDD/jitterentropy-library/tree/master/tests/raw-entropy/recording_runtime_kernelspace

[2] https://github.com/smuellerDD/jitterentropy-library/tree/master/tests/raw-entropy#approach-to-solve-insufficient-entropy

Ciao
Stephan





