Re: [PATCH v36 00/13] /dev/random - a new approach

On Mon, 19 Oct 2020 21:28:50 +0200
Stephan Müller <smueller@xxxxxxxxxx> wrote:
[...]
> * Sole use of crypto for data processing:
[...]
>  - The LRNG uses only properly defined and implemented cryptographic
>    algorithms, unlike the existing /dev/random implementation's use of
>    the SHA-1 transformation.
> 
>  - Hash operations use NUMA-node-local hash instances to benefit large
>    parallel systems.
> 
>  - The LRNG uses a limited number of data post-processing steps
[...]
> * Performance
> 
>  - Up to 75% faster in the critical code path of the interrupt
>    handler, depending on the data collection size configurable at
>    kernel compile time; the default configuration performs about
>    equally to the existing /dev/random, as outlined in [2] section 4.2.

[...]
>  - The ChaCha20 DRNG is significantly faster than the one implemented
>    in the existing /dev/random, as demonstrated in [2] table 2.
> 
>  - Faster entropy collection during boot to reach the fully seeded
>    level, including on virtual systems or systems with SSDs, as
>    outlined in [2] section 4.1.
> 
> * Testing
[...]

So we now have two proposals for a state-of-the-art RNG, and over a
month without a single on-topic comment from anyone listed by
`get_maintainer.pl`.

I don't want to put too much emphasis on the certification aspects. The
relation is rather that those certifications require certain code
features, features which are reasonable in their own right. But the
current code is lagging far behind.

I see the focus mainly on performance, scalability, testability and
virtualisation. And it is certainly an advantage to use the code
already present under crypto/, with its optimisations, rather than rely
on some home-brew implementation; a minimal sketch of what that looks
like follows below.
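
To make that concrete, here is a rough illustration (again my own, not
code from either proposal, with a hypothetical helper name) of hashing
an entropy buffer through the existing crypto API, which transparently
picks up the arch-optimised SHA-256 implementations, instead of
open-coding a SHA-1 transform:

  #include <crypto/hash.h>
  #include <linux/err.h>

  /* Hypothetical helper: digest an entropy buffer with whatever
   * optimised SHA-256 implementation the crypto layer provides. */
  static int hash_entropy_buf(const u8 *buf, unsigned int len, u8 *digest)
  {
  	struct crypto_shash *tfm;
  	int ret;

  	tfm = crypto_alloc_shash("sha256", 0, 0);
  	if (IS_ERR(tfm))
  		return PTR_ERR(tfm);

  	{
  		SHASH_DESC_ON_STACK(desc, tfm);

  		desc->tfm = tfm;
  		ret = crypto_shash_digest(desc, buf, len, digest);
  		shash_desc_zero(desc);
  	}

  	crypto_free_shash(tfm);
  	return ret;
  }

Whatever the exact shape, the point is that the arch-specific
optimisations and the algorithm maintenance come for free.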

Can we please have a discussion about how to proceed?
Ted, Greg, Arnd: which approach would you prefer?

	Torsten



