drbg using jitterentropy_rng causes huge latencies

Hello!

On an embedded board based on an ARM i.MX6UL we observed latencies of over 60 ms
during IPsec tests (strongSwan), caused by the DRBG being seeded from jitterentropy_rng.

These latencies cause various problems for our system (the watchdog is not triggered
in time, our communication system loses synchronization, ...).

In the past, before commit https://github.com/torvalds/linux/commit/97f2650e504033376e8813691cb6eccf73151676,
the Jitter RNG was only used if get_random_bytes() did not yet have sufficient entropy available.
Before updating our kernel from 5.4.180 to 5.4.205 (the behavior of the current kernel 6.0-rc4
appears to be the same) we did not notice these latencies, because the Jitter RNG was never
actually used on our system (get_random_bytes() was always ready when seeding the DRBG).
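
Schematically, the behavior changed roughly like this (illustrative pseudocode only,
not the actual crypto/drbg.c code; the helper names are made up for clarity):

    /* Before the commit: Jitter RNG only as a fallback */
    static int drbg_seed_old(struct drbg_state *drbg)
    {
            if (!rng_is_initialized())
                    return seed_from_jitterentropy(drbg); /* slow path, rarely taken */
            return seed_from_get_random_bytes(drbg);      /* fast path */
    }

    /* After the commit: Jitter RNG output is always pulled in */
    static int drbg_seed_new(struct drbg_state *drbg)
    {
            seed_from_get_random_bytes(drbg);
            return seed_from_jitterentropy(drbg);         /* always taken, can be slow */
    }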

We need to disable the Jitter RNG, but that is currently not possible because the config
option CRYPTO_JITTERENTROPY is selected by CRYPTO_DRBG. The DRBG only enforces the use of
the Jitter RNG in FIPS mode, so I propose decoupling these config options: outside of FIPS
mode the DRBG does not need the Jitter RNG. This would allow slower systems to use the DRBG
without the latencies caused by the Jitter RNG.
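
As a sketch of what the decoupling could look like (illustrative only; the exact
entry in crypto/Kconfig and its other selects may differ), the select could be made
conditional on FIPS support:

    config CRYPTO_DRBG
            tristate
            select CRYPTO_JITTERENTROPY if CRYPTO_FIPS

Non-FIPS configurations could then build the DRBG without CRYPTO_JITTERENTROPY, while
FIPS builds would keep today's behavior.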

We maintain our own set of kernel patches specific to our needs, but I think this change
would also be useful for others who want to use the DRBG on slower systems without
jitterentropy_rng.

Would there be any chance to get such a patch merged?
Or could the Jitter RNG be optimized to not cause such latencies?

Thanks,
Benjamin



